<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matt Martz</title>
    <description>The latest articles on Forem by Matt Martz (@martzmakes).</description>
    <link>https://forem.com/martzmakes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F255133%2Fab984b00-08b4-4d57-b10c-b8134bfdc2fb.jpg</url>
      <title>Forem: Matt Martz</title>
      <link>https://forem.com/martzmakes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/martzmakes"/>
    <language>en</language>
    <item>
      <title>Supercharging a Serverless Slackbot with Amazon Bedrock</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Tue, 07 Nov 2023 13:30:50 +0000</pubDate>
      <link>https://forem.com/aws-builders/supercharging-a-serverless-slackbot-with-amazon-bedrock-5fag</link>
      <guid>https://forem.com/aws-builders/supercharging-a-serverless-slackbot-with-amazon-bedrock-5fag</guid>
      <description>&lt;p&gt;In the dynamic world of software development, staying abreast of changes and deployments is crucial for team collaboration and efficiency. In my previous post, &lt;a href="https://dev.to/martzcodes/from-code-to-conversation-bridging-github-actions-and-slack-with-cdk-2n5n-temp-slug-1306421"&gt;From Code to Conversation: Bridging GitHub Actions and Slack with CDK&lt;/a&gt;, I introduced a solution that used AWS Cloud Development Kit (CDK) to deploy a Lambda and DynamoDB-powered Slack App that gave teams push-button deployments between environments from Slack. Building on that foundation, this follow-up article delves into a significant enhancement the integration of Amazon Bedrock, AWS's generative AI service, to revolutionize how we handle commit logs and release summaries.&lt;/p&gt;

&lt;p&gt;The updated Deployer Bot is not just smarter; it's designed to be more responsive and informative by utilizing an event-driven architecture that streamlines notifications and summaries. By tapping into the power of generative AI, the bot now offers concise, human-readable summaries of commits and release notes, making it easier for teams to grasp the impact of their work at a glance.&lt;/p&gt;

&lt;p&gt;As an example: when I added this to my team's internal Slack Bot (built on the previous post's work), Bedrock provided this commit summary: &lt;strong&gt;"The commit enables the bot to summarize code changes and releases using AI via AWS Bedrock, including analyzing commits for risks and generating release prep recommendations between environments."&lt;/strong&gt; My commit message was only "&lt;em&gt;bedrock... not handling brext w/ios though&lt;/em&gt;". 🤯&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnskrrhgrtrp9b688tdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnskrrhgrtrp9b688tdc.png" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Commits trigger webhooks that flow through a series of AWS Lambda functions, orchestrating the process from commit tracking to AI-powered summarization, culminating in neatly packaged per-environment releases communicated via Slack. As part of the commit analysis and summarization, we get results like this on a completed deployment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm4jg1mxdzwfi56h5k0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm4jg1mxdzwfi56h5k0j.png" alt="An example output of bedrock that does not recommend promoting code to the next environment because of a possibly breaking change." width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we'll explore the rationale behind each architectural decision, the process of incorporating Amazon Bedrock into our bot, and the benefits that an event-driven model brings to our CI/CD pipelines. For the DevOps enthusiasts and the code-curious alike, the complete codebase is accessible on GitHub at &lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-cicd-bedrock-releases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A quick note on costs: Amazon Bedrock and the models it runs are NOT free. In my case, I've restricted the analysis to a limit of 10,000 "tokens", so the max cost of processing a large commit or release is about $0.10 - $0.20. Claude's upper token limit is 100k tokens, which would have a max cost of $1-2 per model invocation.&lt;/em&gt;&lt;/p&gt;
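To keep these costs predictable, the per-invocation ceiling can be estimated from the token cap and the model's per-1k-token price. A minimal sketch; the price constant below is an illustrative assumption, not Anthropic's actual rate card, so check the Bedrock pricing page before relying on it:

```typescript
// Rough cost ceiling for one Bedrock invocation, given a token cap.
// ASSUMED_PRICE_PER_1K_TOKENS is a placeholder, not the real rate.
const ASSUMED_PRICE_PER_1K_TOKENS = 0.011; // USD per 1k tokens, illustrative only

export const maxInvocationCost = (tokenCap: number): number =>
  (tokenCap / 1000) * ASSUMED_PRICE_PER_1K_TOKENS;

// With the article's 10k-token cap, the ceiling stays in the cents range;
// at Claude's 100k-token limit it is 10x higher.
const cappedCeiling = maxInvocationCost(10_000);
const uncappedCeiling = maxInvocationCost(100_000);
```

This is why the token limit matters: it bounds the worst-case cost per commit or release summary regardless of how large the diff is.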

&lt;h2&gt;
  
  
  Revamping the Architecture: Embracing Event-Driven Design
&lt;/h2&gt;

&lt;p&gt;The transformation of the Deployer Bot's architecture to an event-driven model marks a significant enhancement from its original design. This section will explore the rationale behind adopting an event-driven approach, the benefits it offers, and how it is implemented within the context of the Deployer Bot integrated with Amazon Bedrock for AI-powered commit summarization and release management.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Understanding Event-Driven Architecture (EDA)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Event-Driven Architecture (EDA) is a design paradigm centered around the production, detection, consumption, and reaction to events. An event is any significant state change that is of interest to a system or component. EDA allows for highly reactive systems that are more flexible, scalable, and capable of handling complex workflows. It is particularly well-suited for asynchronous data flow and microservices patterns, often found in cloud-native environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Event-Driven?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The original Deployer Bot followed a more traditional request/response model, where actions were triggered by direct requests. While functional, this approach had limitations in terms of scalability and real-time responsiveness. The integration of Amazon Bedrock and the necessity to process and summarize commit data presented an opportunity to redesign the architecture to be more reactive and efficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Benefits of Event-Driven Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; : EDA allows each component to operate independently, scaling up or down as needed without impacting the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resilience&lt;/strong&gt; : The decoupled nature of services in EDA results in a system that is less prone to failures. If one service goes down, the rest can continue to operate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Processing&lt;/strong&gt; : Events can be processed as soon as they occur, providing immediate feedback and actions, which is crucial for CI/CD workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility&lt;/strong&gt; : New event consumers can be added to the architecture without impacting existing workflows, allowing for easier updates and enhancements.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Implementing EDA in Deployer Bot&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The integration of EDA into the Deployer Bot involves several key components working in tandem:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Sources&lt;/strong&gt; : These are the triggers for the workflow, such as GitHub webhooks for commits and deployments that initiate the process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Bus&lt;/strong&gt; : AWS services like Amazon EventBridge can serve as the backbone of EDA, routing events to the appropriate services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda Functions&lt;/strong&gt; : Serverless functions respond to events, such as fetching, processing, and summarizing commit data, and orchestrating the workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DynamoDB&lt;/strong&gt; : Acts as the storage mechanism, logging events, and maintaining state where necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Bedrock&lt;/strong&gt; : Provides AI-powered summarization of commits and releases.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
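To make the routing between these components concrete, here's a minimal sketch of the kind of event a commit handler might put on the default bus, along with the EventBridge rule pattern that would match it. The detail-type and field names here are illustrative assumptions, not necessarily the repo's actual schema:

```typescript
// Hypothetical shape of a "process commit" event on the default bus.
interface CommitEventDetail {
  repo: string;
  sha: string;
  branch: string;
  message: string;
}

// Build a PutEvents-style entry (plain object, no AWS SDK needed here).
export const buildCommitEvent = (detail: CommitEventDetail) => ({
  Source: "deployer.github",
  DetailType: "processCommit",
  Detail: JSON.stringify(detail),
  EventBusName: "default",
});

// An EventBridge rule pattern that would route these events to the
// process-commit lambda.
export const processCommitRule = {
  source: ["deployer.github"],
  "detail-type": ["processCommit"],
};
```

The rule pattern is what decouples producers from consumers: a new consumer just registers another rule against the same source/detail-type pair without touching the producer.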

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6kox46jlwz9em9937im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6kox46jlwz9em9937im.png" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the original architecture (above), everything was synchronous and driven by webhooks. A GitHub Actions CI/CD deployment would occur, which would lead to a message posted in Slack with a button. If the approve button was clicked, it created the next environment's deployment in GitHub. There was no tracking of commits (or even of what was in a release), and as a user of this system for several months, I often found it hard to link the Slack messages back to the actual code being deployed (despite the commit SHAs being there). There was a lot of mental overhead. 🥵&lt;/p&gt;

&lt;p&gt;In the new architecture (below) we expand on this by isolating responsibilities between lambdas, making them simpler (each does less) and tracking new information asynchronously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuh300jzv6y1gxmt594p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuh300jzv6y1gxmt594p.png" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two parallel/asynchronous paths happen as part of the deployments in GitHub. The first path relates to the commit and the second relates to the deployment.&lt;/p&gt;

&lt;p&gt;Some notes on the architecture diagram:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The red lines are for visibility only (to help highlight the paths when lines cross each other).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We're only using the Default Event Bus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambdas F, B, and 2 are API Driven and in the Nested API Stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The rest of the Lambdas are Event Driven and in the Nested Event Stack.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the first path with the GitHub Commit Webhook...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GitHub's commit webhook sends the push event to the &lt;code&gt;/github/commit&lt;/code&gt; endpoint&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The lambda verifies that the push was a commit to the main branch of a project we care about. It forwards an event to the event bus with the minimum information needed to process the commit message, then quickly responds to GitHub with an OK status. (If we waited for the commit fetching/analysis via Bedrock, the lambda sometimes wouldn't respond quickly enough and GitHub would think it failed.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Asynchronously the process-commit lambda fetches the actual FULL commit from GitHub which includes the patches made in this commit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The commit with patches is sent to Bedrock, where we use &lt;a href="https://www.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic's Claude v2&lt;/a&gt; LLM to summarize the commit in 1-2 sentences for a target audience of developers and product managers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The commit (without the patches) and summary are then stored in DynamoDB for later release-querying.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
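The filtering in step 2 can be sketched as a pure function: given the push payload, either return the minimal detail to forward to the event bus, or null to ignore it. The tracked-repo list and payload field names below are illustrative assumptions modeled on GitHub's push webhook shape:

```typescript
// Minimal sketch of the commit-webhook filter (step 2 above).
// TRACKED_REPOS is a hypothetical allowlist of projects we care about.
const TRACKED_REPOS = new Set(["my-org/my-service"]);

interface PushPayload {
  ref: string; // e.g. "refs/heads/main"
  repository: { full_name: string };
  head_commit: { id: string; message: string };
}

// Returns the minimal event detail to forward, or null to drop the webhook.
export const filterPush = (payload: PushPayload) => {
  if (payload.ref !== "refs/heads/main") return null;
  if (!TRACKED_REPOS.has(payload.repository.full_name)) return null;
  return {
    repo: payload.repository.full_name,
    sha: payload.head_commit.id,
    message: payload.head_commit.message,
  };
};
```

Keeping this function free of I/O is what lets the webhook lambda answer GitHub immediately; the expensive fetch-and-summarize work happens downstream off the bus.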

&lt;p&gt;The Commit path takes on the order of 30 seconds (frequently less) to complete. Meanwhile, since a commit occurred on the main branch and triggered GitHub Actions... the pipeline should be testing/deploying in the background. Once the GitHub Actions pipeline is complete it will send a deployment webhook:&lt;/p&gt;

&lt;p&gt;A. The GitHub Actions pipeline completes sending a deployment webhook to the API&lt;/p&gt;

&lt;p&gt;B. The Lambda that receives this webhook stores the deployment information in DynamoDB and emits two EventBridge events. One to send a message to the deployment channel in Slack and another to summarize the release.&lt;/p&gt;

&lt;p&gt;C. The track-release lambda fetches all of the commits that occurred in the environment since the last release. Here, a release is a group of commits that were newly deployed in an environment. The dev environment releases are (usually) single commits. Ideally, test and prod would follow this pattern, but frequently there's some lag and the test/prod releases end up being larger. &lt;em&gt;Note: this lambda also fetches the NEXT higher environment's commits (a sort of "draft" release), and those also get summarized. I should have spun this out into a separate lambda, but I'll leave that for future-me.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;D. With the release commits fetched, they're all sent together to Bedrock to be summarized. For a larger release, it ends up summarizing multiple commit summaries.&lt;/p&gt;

&lt;p&gt;E. With the release summary and next-env summary, the track-release lambda stores the release notes in DynamoDB and sends an event to update the deployment's message with this new information.&lt;/p&gt;

&lt;p&gt;F. (Arguably this could be a third path... 😅) Once the user clicks the approve button, the &lt;code&gt;/slack/interactive&lt;/code&gt; endpoint emits an event to deploy the next environment.&lt;/p&gt;

&lt;p&gt;G. A lambda receives that event and triggers the GitHub Actions pipeline for the next environment.&lt;/p&gt;
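Step C's "commits since the last release" query can be sketched as a pure function over the stored commits. This is a simplified illustration, assuming commits come back from DynamoDB newest-first and the previous release head is identified by SHA (the actual repo may key this differently):

```typescript
// Simplified sketch of grouping commits into a release (step C above).
interface StoredCommit {
  sha: string;
  summary: string; // the Bedrock-generated 1-2 sentence summary
}

// commits: newest-first; lastDeployedSha: head of the previous release,
// or undefined for the very first release.
export const commitsSinceLastRelease = (
  commits: StoredCommit[],
  lastDeployedSha: string | undefined,
): StoredCommit[] => {
  if (!lastDeployedSha) return commits; // first release: everything ships
  const idx = commits.findIndex((c) => c.sha === lastDeployedSha);
  // If the marker SHA isn't found, conservatively include everything.
  return idx === -1 ? commits : commits.slice(0, idx);
};
```

This is also why dev releases are usually single commits while test/prod releases grow: the slice simply widens the longer an environment lags behind main.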

&lt;h3&gt;
  
  
  Code Structure
&lt;/h3&gt;

&lt;p&gt;I'm not going to do a complete walkthrough of the code... because there is a lot of it. Instead I will highlight particular files of interest at a high level. Feel free to reach out on socials or in the comments if you'd like something explained more in-depth.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/blog-cicd-bedrock-releases-stack.ts" rel="noopener noreferrer"&gt;blog-cicd-bedrock-releases-stack.ts&lt;/a&gt; - This stack creates the DynamoDB table and two Nested Stacks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/nested-api-stack.ts" rel="noopener noreferrer"&gt;nested-api-stack.ts&lt;/a&gt; - Creates a RestAPI backed by Lambdas for the Webhook Endpoints&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/routes/webhooks.ts" rel="noopener noreferrer"&gt;routes/webhooks.ts&lt;/a&gt; - Defines the Endpoint structure for the lambdas and what permissions they should have via a common interface (nested-api-stack uses this to build the lambdas)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/constructs/api.ts" rel="noopener noreferrer"&gt;constructs/api.ts&lt;/a&gt; - Creates the actual API and Lambdas w/their permissions and paths based on the &lt;code&gt;routes/webhooks.ts&lt;/code&gt; file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/nested-event-stack.ts" rel="noopener noreferrer"&gt;nested-event-stack.ts&lt;/a&gt; - Creates Event-Driven Lambdas and their Rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/routes/events.ts" rel="noopener noreferrer"&gt;routes/events.ts&lt;/a&gt; - Similar to &lt;code&gt;routes/webhooks.ts&lt;/code&gt; this defines the Lambdas with their corresponding Rules and Permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/tree/main/lib/lambda" rel="noopener noreferrer"&gt;lambda/&lt;/a&gt; - This folder contains all of the lambda runtime eligible code. Files from inside of here should not be making imports outside of this file structure (other than external npm libraries). This is to help isolate the code and ensure we aren't accidentally bundling things into our lambdas we don't need. I've seen this a lot in teams that make heavy use of &lt;code&gt;index.ts&lt;/code&gt; files for imports 🤢&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts" rel="noopener noreferrer"&gt;lambda/common/bedrock.ts&lt;/a&gt; - The Bedrock Helper file which has the functions to do the various summaries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;😍 I've been using a similar pattern at work using Nested API and Nested EventBridge Stacks and am loving it. If you'd be interested in a dedicated post on that let me know in the comments!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt-Engineering for Commit and Release Summaries
&lt;/h2&gt;

&lt;p&gt;The integration of generative AI into the Deployer Bot's operations involved precise prompt engineering to ensure that commit and release summaries are informative and accessible. The focus was on creating concise yet comprehensive summaries tailored to the needs of both developers and product managers. The following discussion dives into how the code facilitates this process.&lt;/p&gt;

&lt;p&gt;The below prompts are in the lambda helper methods at &lt;a href="https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-cicd-bedrock-releases/blob/main/lib/lambda/common/bedrock.ts&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Commit Summaries
&lt;/h3&gt;

&lt;p&gt;For commit summaries, I designed a prompt to guide the AI to provide succinct summaries that highlight the purpose and potential impact of the changes, particularly emphasizing backward compatibility and flagging possible breaking changes. The following TypeScript excerpt outlines this process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Function to generate a summary for a single commit
export const summarizeCommit = async (commit: string): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  ...
  const prompt = `...Provide a 1-2 sentence summary of the commit that would be useful for developers and product managers...`;
  ...
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the function &lt;code&gt;summarizeCommit&lt;/code&gt;, the prompt specifically instructs the AI to focus on a summary that is relevant to both technical stakeholders and decision-makers. This helps ensure that any non-backwards compatible changes are prominently reported, which is crucial for maintaining the integrity of the API.&lt;/p&gt;
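For reference, Claude v2 on Bedrock uses a text-completion interface: the prompt is wrapped in `\n\nHuman: ... \n\nAssistant:` turns inside a JSON body that also carries the generation parameters. A sketch of assembling that input is below; the prompt wording is abbreviated (the repo's `bedrock.ts` has the full text), and the temperature value is an illustrative choice:

```typescript
// Sketch of the InvokeModelCommand input for a commit summary on Claude v2.
// The prompt text here is abbreviated relative to the repo's actual prompt.
export const buildCommitSummaryInput = (commit: string) => ({
  modelId: "anthropic.claude-v2",
  contentType: "application/json",
  accept: "application/json",
  body: JSON.stringify({
    // Claude's text-completion format requires Human/Assistant turns.
    prompt: `\n\nHuman: Provide a 1-2 sentence summary of the commit that would be useful for developers and product managers:\n${commit}\n\nAssistant:`,
    max_tokens_to_sample: 10000, // the article's cost-bounding token cap
    temperature: 0.5, // illustrative; lower values keep summaries more literal
    stop_sequences: ["\n\nHuman:"],
  }),
});
```

Everything after `Assistant:` in the model's reply becomes the summary, and the stop sequence prevents the model from hallucinating a follow-up "Human" turn.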

&lt;h3&gt;
  
  
  Release Summaries
&lt;/h3&gt;

&lt;p&gt;The task of summarizing releases brings together multiple commits into a narrative that outlines the key developments and their implications. The &lt;code&gt;summarizeRelease&lt;/code&gt; function employs a carefully designed prompt to distill this information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Function to create a summary for a release
export const summarizeRelease = async (release: string): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  ...
  const prompt = `...You will create a 1-4 sentence summary of the release below...`;
  ...
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the prompt emphasizes not only the inclusion of changes but also highlights the importance of metrics, contributions, and cadence, all of which are critical for assessing the release's impact.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment Comparison Summaries
&lt;/h3&gt;

&lt;p&gt;When preparing to promote changes from one environment to another, it's vital to understand the differences. The &lt;code&gt;prepRelease&lt;/code&gt; function encapsulates this through its prompt, which is structured to provide a recommendation based on the commits analyzed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Function to summarize differences between environments and provide a release recommendation
export const prepRelease = async ({
  ...
}): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  ...
  const prompt = `...Make a recommendation for whether to promote or not...`;
  ...
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this function, the AI is tasked not just with summarizing the technical changes but also with evaluating the suitability of promoting the release, incorporating a strategic aspect into the summary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Utilizing Amazon Bedrock Runtime
&lt;/h3&gt;

&lt;p&gt;All these prompts are then passed to the Amazon Bedrock Runtime, invoking the model through &lt;code&gt;InvokeModelCommand&lt;/code&gt; with an input that defines the parameters of the AI's generation process, including token limits and stop sequences. These configurations are essential for controlling costs and ensuring the responses are concise:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const command = new InvokeModelCommand(input);
const response = await client.send(command);
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet is a crucial part of the process, as it executes the command and handles the response from the Bedrock AI, translating it into a usable summary.&lt;/p&gt;
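The response handling is worth making explicit: the Bedrock response body arrives as a byte array, so turning it into a usable summary is a decode plus a JSON parse. A small sketch, assuming the Claude v2 response shape with a `completion` field:

```typescript
// Decode a Bedrock InvokeModel response body (a Uint8Array) into the
// model's completion text. Claude v2 responses carry a `completion` field.
export const extractCompletion = (body: Uint8Array): string => {
  const json = new TextDecoder("utf-8").decode(body);
  const parsed = JSON.parse(json) as { completion?: string };
  // Claude often leads with whitespace after "Assistant:", so trim it.
  return (parsed.completion ?? "").trim();
};
```

In the lambda this would be called on `response.body` from `client.send(command)`; keeping it as a separate helper makes the decoding trivially unit-testable without mocking the SDK.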

&lt;p&gt;&lt;strong&gt;ONE IMPORTANT NOTE:&lt;/strong&gt; At the time of this writing (11/6/2023), the AWS Lambda runtime for Node.js 18 does NOT include &lt;code&gt;@aws-sdk/client-bedrock&lt;/code&gt;. As an added "bonus", the CDK's &lt;code&gt;NodejsFunction&lt;/code&gt; construct (which uses &lt;code&gt;esbuild&lt;/code&gt;) marks &lt;code&gt;@aws-sdk/*&lt;/code&gt; as external modules by default. This means that &lt;code&gt;@aws-sdk/client-bedrock&lt;/code&gt; ends up NOT being bundled into the lambda. To get around this, I needed to update our NodejsFunction props to bypass the default exclusion. I also have to give the lambdas IAM access to invoke Bedrock models. This can be done by adding an initial policy to the lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const fn = new NodejsFunction(this, `${endpoint.lambda}Fn`, {
  // ...
  bundling: {
    // Nodejs function excludes aws-sdk v3 by default because it is included in the lambda runtime
    // but bedrock is not built into the lambda runtime so we need to override the @aws-sdk/* exclusions
    externalModules: [
      "@aws-sdk/client-dynamodb",
      "@aws-sdk/client-eventbridge",
      "@aws-sdk/client-secrets-manager",
      "@aws-sdk/lib-dynamodb",
    ],
  },
  ...(endpoint.bedrock &amp;amp;&amp;amp; {
    initialPolicy: [
      new PolicyStatement({
        effect: Effect.ALLOW,
        actions: ["bedrock:InvokeModel"],
        resources: ["*"],
      }),
    ],
  }),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Continual Evolution of Prompts
&lt;/h3&gt;

&lt;p&gt;It's important to note that these prompts are not static. They are subject to continuous evaluation and iteration, ensuring that the summaries remain pertinent and value-adding as the project and AI capabilities evolve.&lt;/p&gt;

&lt;p&gt;By embedding such targeted prompts into the Deployer Bot's workflow, the DevOps team ensures that the summaries generated are not only informative but also actionable, fostering a deeper understanding and facilitating informed decision-making throughout the development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now let's try it!
&lt;/h2&gt;

&lt;p&gt;In order to test this and iterate on my prompt templates, I created a CICD-example project: &lt;a href="https://github.com/martzcodes/cicd-example" rel="noopener noreferrer"&gt;https://github.com/martzcodes/cicd-example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This project uses OIDC and GitHub Actions to deploy the stack. In this case, I'm just deploying the same stack with an "environment"-specific name to the same account. To reset, I would stash my changes, force-push to an earlier state, and re-apply the stashes.&lt;/p&gt;

&lt;p&gt;Backwards compatibility is really important in software engineering, so it was one of the first things I wanted to focus on. I created a simple RestApi with a single endpoint pointed at &lt;code&gt;/dummy&lt;/code&gt;. On my first attempt at prompt engineering, I included a statement like &lt;code&gt;APIs must be backwards compatible, if they are not make a note of it.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I then did a deployment where I renamed the &lt;code&gt;/dummy&lt;/code&gt; endpoint to &lt;code&gt;/something&lt;/code&gt; (creative, I know). The response from Bedrock specifically said this was backwards compatible / not a breaking change:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The release for cicd-example in prod environment contains 5 commits:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adds an API Gateway API with Lambda endpoint
&lt;/li&gt;
&lt;li&gt;Defines the Lambda handler function
&lt;/li&gt;
&lt;li&gt;Updates API path from /dummy to /example (committed twice)
&lt;/li&gt;
&lt;li&gt;Updates API Gateway path resource from dummy to example
No breaking changes or bugs were noted. The API update is backwards compatible.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;A renamed endpoint could absolutely be breaking. After a few iterations, I settled on a prompt line like: &lt;code&gt;APIs must be backwards compatible which includes path changes, if they are not it should be highlighted in the summary.&lt;/code&gt; After that, I got much more reliable callouts for path changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1699333938913%2Fe9f8a6d5-414d-48ff-837f-6771847f1380.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1699333938913%2Fe9f8a6d5-414d-48ff-837f-6771847f1380.png" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also tried getting Bedrock/Claude to detect misspellings in code. For example, I defined an endpoint called &lt;code&gt;/soemthing&lt;/code&gt;... I could not find a prompt combination that would identify that misspelling, and in fact, in the summaries, it actually &lt;em&gt;corrected&lt;/em&gt; it (which is VERY BAD) 😬&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The release adds a new API and Lambda function. A new /something endpoint was added to the API without breaking backwards compatibility.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After installing it at work and running it for a day, I asked my colleagues for feedback on its accuracy. The feedback was very positive, but it wasn't perfect.&lt;/p&gt;

&lt;p&gt;For example...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The XXXXXX repo in the dev environment released changes on 2023-11-06T22:26:41.517Z. It includes 1 commit which adds a new '/cognito/revoke' endpoint that could break backwards compatibility if clients are not updated. No other major changes or risks noted.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A new endpoint is rarely a breaking change. Back to the drawing board, I guess 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Directions and Conclusion
&lt;/h2&gt;

&lt;p&gt;As we continue to explore the intersection of generative AI and DevOps, I can see a lot of potential for a GenerativeAI+Serverless Deployer Bot. The integration of AI-driven summaries for commits and releases is just the beginning. The future is poised for a host of innovative features that could transform CI/CD pipelines and development workflows, making them more efficient and intelligent.&lt;/p&gt;

&lt;h3&gt;
  
  
  Expanding AI Capabilities in DevOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Code Review Assistance&lt;/strong&gt; : By refining our prompts, we could extend the Deployer Bot's functionality to include automated code reviews, where the bot could provide preliminary feedback on pull requests, analyzing code for style, complexity, and even security vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Troubleshooting Guides&lt;/strong&gt; : Generative AI could be harnessed to create real-time troubleshooting guides based on the errors and logs encountered during builds or deployments, providing developers with immediate, context-specific solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Predictive Analytics for CI/CD&lt;/strong&gt; : Leveraging historical data, the bot could predict potential bottlenecks and suggest optimizations in the CI/CD pipeline, leading to preemptive resource management and smoother release cycles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Personalized Developer Assistance&lt;/strong&gt; : AI could be programmed to learn individual developer preferences and work patterns, offering customized tips, reminders, and resources to enhance productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Onboarding&lt;/strong&gt; : For new team members, the Deployer Bot could become an on-demand mentor, explaining CI/CD processes and codebase navigation, and providing answers to common questions through an interactive AI-driven chat.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI-Powered Testing and Quality Assurance&lt;/strong&gt; : Integrating AI to analyze test results could lead to quicker identification of flaky tests and provide insights on test coverage and quality, potentially predicting which parts of the code are most likely to fail.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The integration of generative AI into the Deployer Bot represents a significant leap forward for DevOps teams. It is a testament to the transformative potential of AI when applied with precision and creativity. The Deployer Bot, once a mere facilitator of notifications, has evolved into a sophisticated assistant that enhances decision-making and streamlines workflows. Looking ahead, I am excited about the prospect of a more proactive, AI-powered assistant that not only informs but also predicts and strategizes, becoming an indispensable ally in the fast-paced world of software development.&lt;/p&gt;

&lt;p&gt;The current capabilities of the Deployer Bot lay the foundation for these advancements. I'm sure this will not be my last post on the matter. My recent trip to EDA Day in Nashville gave me a lot of inspiration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From Code to Conversation: Bridging GitHub Actions and Slack with CDK</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Mon, 21 Aug 2023 14:13:31 +0000</pubDate>
      <link>https://forem.com/aws-builders/from-code-to-conversation-bridging-github-actions-and-slack-with-cdk-8ic</link>
      <guid>https://forem.com/aws-builders/from-code-to-conversation-bridging-github-actions-and-slack-with-cdk-8ic</guid>
      <description>&lt;p&gt;In this post, we're diving into the powerful world of automation using the AWS Cloud Development Kit (CDK) to create a serverless-backed Slack App. The goal? Seamlessly managing application deployments through &lt;a href="https://docs.github.com/en/free-pro-team@latest/rest/deployments/deployments?apiVersion=2022-11-28" rel="noopener noreferrer"&gt;GitHub Deployments&lt;/a&gt;. Here's a glimpse into how it all ties together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Deployments&lt;/strong&gt; notifies our serverless app, which consists of an AWS API Gateway Rest API backed by multiple lambda functions. One of these lambdas gets invoked by the GitHub Deployments webhook to handle deployment status updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once informed, our bot relays these updates to a Slack channel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upon a successful deployment, the bot prompts users with, "Do you want to deploy this to the next environment?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Users have the option to greenlight this deployment or reject it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And when it comes to the all-important production environment, only selected approvers can make the final decision. Others can voice their views through a "vote".&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best part? &lt;strong&gt;&lt;em&gt;Although our demonstration employs CDK, this framework isn't limited to CDK deployments.&lt;/em&gt;&lt;/strong&gt; You can adapt it for any project deployable via GitHub Actions, making it a flexible tool for various deployment needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp42xlhngh1rq1erqqj49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp42xlhngh1rq1erqqj49.png" width="667" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's a structured flow of our approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Setting up a placeholder for secret tokens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Constructing the CDK App with key components: RestApi, DynamoDB, and several lambdas.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Safeguarding the GitHub access token and Slack Bot token inside our placeholder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrating the GitHub Deployment Webhook.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensuring seamless Slack interactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incorporating GitHub Deployments within our GitHub Actions workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Eager to dive into the code? Check out the project on &lt;a href="https://github.com/martzcodes/blog-cicd-slackbot" rel="noopener noreferrer"&gt;GitHub: https://github.com/martzcodes/blog-cicd-slackbot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; : This project has multiple layers, and we won't be delving into each line of code. If you find any part lacking, don't be shy. Drop your questions in the comments or catch me on Mastodon at &lt;a href="https://awscommunity.social/@martzcodes" rel="noopener noreferrer"&gt;https://awscommunity.social/@martzcodes&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Placeholder Secret
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why a Placeholder?&lt;/strong&gt; : Before diving in, you might wonder why we're setting placeholders. When working with AWS CDK, there's a tendency to automate the creation of secrets. However, I've found this method can sometimes reset these secrets unintentionally. And although CloudFormation is an option, it would expose our Secret values in its YAML file. This is where manually set placeholders come in handy: they ensure our secrets remain secret.&lt;/p&gt;

&lt;p&gt;Let's get started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Head over to the AWS Console and &lt;a href="https://us-east-1.console.aws.amazon.com/secretsmanager/newsecret?region=us-east-1" rel="noopener noreferrer"&gt;create a new Secret&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the "Other type of secret" option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Toggle over to the &lt;code&gt;Plaintext&lt;/code&gt; option under the Key/value pairs tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the value, enter: &lt;code&gt;{"SLACK_TOKEN":"xoxb-placeholder","GITHUB_TOKEN":"github_pat_placeholder"}&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on "Next".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name your secret &lt;code&gt;slackbot-deployer&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Proceed by clicking "Next".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Opt for "No rotation" and click "Next".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, click on "Store" to save the secret.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once saved, refreshing the secrets list will show you the secret's details. Look out for the secret ARN, which will appear something like this: &lt;code&gt;arn:aws:secretsmanager:us-east-1:123456789012:secret:slackbot-deployer-XYZ123&lt;/code&gt;. Keep this ARN handy, as we'll integrate it into our CDK App in the subsequent steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Deploying the CDK App&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Time to create our CDK App! If you're looking to speed things up, grab the &lt;a href="https://github.com/martzcodes/blog-cicd-slackbot" rel="noopener noreferrer"&gt;example code from my GitHub&lt;/a&gt;. Alternatively, for the DIY enthusiasts, start a CDK project from the ground up. Navigate to your desired project folder and initiate the project with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx cdk init --language typescript

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command spins up a CDK version 2 project. If you've been working with a globally installed version 1 CDK, it's time to uninstall it and stick with &lt;code&gt;npx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once our app is initialized, we're going to make changes to the &lt;code&gt;bin/blog-cicd-slackbot.ts&lt;/code&gt; file to introduce necessary configurations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const oidcs: Record&amp;lt;string, string&amp;gt; = {
  test: "arn:aws:iam::922113822777:role/GitHubOidcRole",
  prod: "arn:aws:iam::349520124959:role/GitHubOidcRole",
};
const nextEnvs: Record&amp;lt;string, string&amp;gt; = {
  dev: "test",
  test: "prod",
};

const app = new cdk.App();
new BlogCicdSlackbotStack(app, "BlogCicdSlackbotStack", {
  nextEnvs,
  oidcs,
  secretArn: "YOUR SECRET ARN HERE",
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration sets the stage for our environment flow and the OIDC Roles, enabling GitHub Actions to deploy CDK applications without saving secrets in GitHub. For a hands-on demonstration on crafting these OIDC roles with CDK, check out the Construct in &lt;code&gt;lib/github-oidc.ts&lt;/code&gt;. While my roles are provisioned externally and are commented out, this should offer you a practical reference.&lt;/p&gt;
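&lt;p&gt;The promotion flow encoded in &lt;code&gt;nextEnvs&lt;/code&gt; can be sketched as a tiny lookup helper (the &lt;code&gt;getNextEnv&lt;/code&gt; name is mine, for illustration only):&lt;/p&gt;

```typescript
// Maps each environment to the one it promotes into; the final
// environment (prod) intentionally has no successor.
const nextEnvs: Record<string, string> = {
  dev: "test",
  test: "prod",
};

// Hypothetical helper: returns the next environment in the chain,
// or undefined when the deployment is already in the last one.
const getNextEnv = (env: string): string | undefined => nextEnvs[env];
```

&lt;p&gt;A successful &lt;code&gt;dev&lt;/code&gt; deployment would thus prompt a promotion to &lt;code&gt;test&lt;/code&gt;, while &lt;code&gt;prod&lt;/code&gt; ends the chain.&lt;/p&gt;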

&lt;h3&gt;
  
  
  &lt;strong&gt;Create a RestAPI&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Got the starter code? Jump ahead to "Deploy the App". If not, keep reading.&lt;/p&gt;

&lt;p&gt;Let's tweak the &lt;code&gt;lib/blog-cicd-slackbot-stack.ts&lt;/code&gt; file, first defining the Stack's Prop interface to resonate with the bin file. Here, we'll amplify the default &lt;code&gt;StackProps&lt;/code&gt; interface by integrating three additional attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface BlogCicdSlackbotStackProps extends cdk.StackProps {
  nextEnvs: Record&amp;lt;string, string&amp;gt;;
  oidcs: Record&amp;lt;string, string&amp;gt;;
  secretArn: string;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, modify the constructor like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;constructor(scope: Construct, id: string, props: BlogCicdSlackbotStackProps) {

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our next move is forging the RestApi and carving out a &lt;code&gt;slack/&lt;/code&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const api = new RestApi(this, "BlogCicdSlackbotApi", {
  deployOptions: {
    dataTraceEnabled: true,
    tracingEnabled: true,
    metricsEnabled: true,
  },
  description: `API for BlogCicdSlackbotApi`,
  endpointConfiguration: {
    types: [EndpointType.REGIONAL],
  },
});
const slackResource = api.root.addResource("slack");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's establish our default lambda environment and properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const environment = {
  OIDCS: JSON.stringify(oidcs),
  SECRET_ARN: secret.secretArn,
  NEXT_ENVS: JSON.stringify(nextEnvs),
  TABLE_NAME: table.tableName,
};

const lambdaProps = {
  runtime: Runtime.NODEJS_18_X,
  memorySize: 1024,
  timeout: cdk.Duration.seconds(30),
  environment,
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt; : Not all our lambda functions will harness the secret/table and related functionalities. We're only granting access to lambdas that genuinely require it; merely knowing the SECRET_ARN, however, is harmless.&lt;/p&gt;

&lt;p&gt;Wrap it up by creating the lambda and its endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const slackAction = new NodejsFunction(this, "SlackActionFn", {
  entry: "lib/lambda/api/slack-action.ts",
  ...lambdaProps,
});
slackResource.addResource("action").addMethod(
  "POST",
  new LambdaIntegration(slackAction)
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Peek into my lambda's handler code present in &lt;code&gt;lib/lambda/api/slack-action.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { APIGatewayEvent } from "aws-lambda";

export const handler = async (event: APIGatewayEvent) =&amp;gt; {
  const body = JSON.parse(event.body || "{}");
  console.log(JSON.stringify({body}, null, 2));

  return {
    statusCode: 200,
    body: body.challenge,
  };
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Heads Up!&lt;/strong&gt; : It's essential to have &lt;code&gt;esbuild&lt;/code&gt; and &lt;code&gt;aws-lambda&lt;/code&gt; types added to your project. Here's how:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i --save-dev esbuild @types/aws-lambda

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures CDK avoids invoking a Docker container for lambda bundling.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploy the App&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Everything's set? Deploy the app! Once done, you'll receive the API Url as an Output resembling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Outputs:
BlogCicdSlackbotStack.BlogCicdSlackbotApiEndpointCDDA7E36 = https://xicnr82c7a.execute-api.us-east-1.amazonaws.com/prod/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Jot down this URL (inclusive of &lt;code&gt;/prod/&lt;/code&gt;); we'll integrate it into the Slack App's manifest soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Gathering the Secrets&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before diving deep, let's ensure we have all the secrets we need for this project safely tucked away in our placeholder Secret.&lt;/p&gt;

&lt;p&gt;We'll retrieve two primary secrets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Personal Access Token&lt;/strong&gt; : Enables our application to interact with GitHub Actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Slack App Bot Token&lt;/strong&gt; : Helps us create and edit Slack messages.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;GitHub Personal Access Token&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For this, we'll employ GitHub's fine-grained tokens, which offer precise control over permissions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Head over to the &lt;a href="https://github.com/settings/tokens?type=beta" rel="noopener noreferrer"&gt;GitHub tokens page&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remember, these tokens come with an expiration which you can set for up to a year.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you're working within an organization like me, set the "Resource owner" to your organization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grant access to &lt;strong&gt;All repositories&lt;/strong&gt; (both public and private).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, ensure you grant the following repository-level access:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you've completed these steps, you can copy your access token. It'll look something like this: &lt;code&gt;github_pat_BLAH&lt;/code&gt;. Keep this safe; we'll need it shortly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Creating a Slack App&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Time to nab the Slack App's Bot token:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start by visiting the &lt;a href="https://api.slack.com/apps" rel="noopener noreferrer"&gt;Slack API page&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on "Create New App".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Opt to create an app using a manifest:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "display_information": {
        "name": "deploy-bot"
    },
    "features": {
        "bot_user": {
            "display_name": "deployer",
            "always_online": true
        },
        "slash_commands": [
            {
                "command": "/deployer_add_auth",
                "url": "REPLACE THIS WITH YOUR APIGW URL/slack/add-approver",
                "description": "Add an approver for Deployments",
                "usage_hint": "@user",
                "should_escape": true
            },
            {
                "command": "/deployer_list_auth",
                "url": "REPLACE THIS WITH YOUR APIGW URL/slack/list-approvers",
                "description": "Show who can approve deployments",
                "should_escape": false
            },
            {
                "command": "/deployer_remove_auth",
                "url": "REPLACE THIS WITH YOUR APIGW URL/slack/remove-approver",
                "description": "Remove someone as an approver",
                "usage_hint": "@user",
                "should_escape": true
            }
        ]
    },
    "oauth_config": {
        "scopes": {
            "user": [
                "users.profile:read"
            ],
            "bot": [
                "app_mentions:read",
                "channels:history",
                "chat:write",
                "chat:write.customize",
                "chat:write.public",
                "emoji:read",
                "groups:history",
                "groups:read",
                "groups:write",
                "im:history",
                "im:read",
                "im:write",
                "incoming-webhook",
                "pins:read",
                "pins:write",
                "reactions:read",
                "reactions:write",
                "users:read",
                "users.profile:read",
                "commands"
            ]
        }
    },
    "settings": {
        "event_subscriptions": {
            "request_url": "REPLACE THIS WITH YOUR APIGW URL/slack/action",
            "bot_events": [
                "app_mention"
            ]
        },
        "interactivity": {
            "is_enabled": true,
            "request_url": "REPLACE THIS WITH YOUR APIGW URL/slack/interaction"
        },
        "org_deploy_enabled": false,
        "socket_mode_enabled": false,
        "token_rotation_enabled": false
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Once the app's up and running, install it in your workspace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;code&gt;OAuth &amp;amp; Permissions&lt;/code&gt; page to fetch the &lt;code&gt;Bot User OAuth Token&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Update the Placeholder Secret&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Armed with both tokens, revisit your placeholder Secret on the AWS Console. Here's what you do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;code&gt;Retrieve secret value&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;code&gt;Edit&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the tokens in their respective key-value fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't forget to hit "Save"&lt;/strong&gt; after inputting both tokens.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Get ready, the exciting part is about to start!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the GitHub Deployment Webhook Integration
&lt;/h2&gt;

&lt;p&gt;Integrating the GitHub Deployment Status webhook can be a game-changer: not only does it ensure timely Slack notifications, but it also helps maintain a reliable history in DynamoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Provisioning a DynamoDB Table&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Firstly, we need to create a table where deployments will be recorded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const table = new Table(this, "Table", {
  partitionKey: { name: "pk", type: AttributeType.STRING },
  sortKey: { name: "sk", type: AttributeType.STRING },
  billingMode: BillingMode.PAY_PER_REQUEST,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  timeToLiveAttribute: "ttl",
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This table is straightforward. Given that our traffic isn't predictable, opting for the Pay Per Request model ensures cost-effectiveness.&lt;/p&gt;
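&lt;p&gt;The single-table key scheme the webhook handler uses later can be sketched as two small builders (the function names are mine, for illustration):&lt;/p&gt;

```typescript
// Partition key groups every status for one repo + environment pair;
// the "LATEST" sort key always points at the current deployment.
const repoEnvPk = (repo: string, env: string): string =>
  `REPO#${repo}#ENV#${env}`.toUpperCase();

// Superseded deployments get archived under their own sort key so
// they stay retrievable without cluttering the "LATEST" lookup.
const deploymentSk = (deploymentId: number): string =>
  `DEPLOYMENT#${deploymentId}`.toUpperCase();
```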

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Incorporating the Secret&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before we proceed, it's essential to incorporate the secret saved from the earlier stages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const secret = Secret.fromSecretCompleteArn(this, `BlogCicdSlackbotSecret`, secretArn);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;3. Lambda &amp;amp; API Gateway Endpoint&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As there's only one GitHub endpoint, bundling the creation of both the Lambda function and its associated resource streamlines the process. Make sure the Lambda function can access both DynamoDB and Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const githubWebhookFn = new NodejsFunction(this, "GithubWebhookFn", {
  entry: "lib/lambda/api/github-webhook.ts",
  ...lambdaProps,
});
table.grantReadWriteData(githubWebhookFn);
secret.grantRead(githubWebhookFn);
api.root
  .addResource("github")
  .addMethod("POST", new LambdaIntegration(githubWebhookFn));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Dive into &lt;code&gt;lib/lambda/api/github-webhook.ts&lt;/code&gt; to examine the logic. While the file may seem hefty, the bulk of it centers around Slack message formatting.&lt;/p&gt;
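&lt;p&gt;Most of that formatting boils down to assembling Slack Block Kit payloads. A minimal sketch of the kind of message it builds (the exact blocks in the repo are richer; this only shows the shape, and the helper name is mine):&lt;/p&gt;

```typescript
// Builds a simplified Block Kit section for a deployment status.
// The field layout here is illustrative, not the repo's exact format.
const deploymentBlocks = (
  repo: string,
  branch: string,
  env: string,
  status: string
) => [
  {
    type: "section",
    text: {
      type: "mrkdwn",
      text: `*Repo:*\n${repo}\n*Branch:*\n${branch}\n${status} deployment to ${env}`,
    },
  },
];
```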

&lt;h4&gt;
  
  
  - Storing Deployment Details:
&lt;/h4&gt;

&lt;p&gt;We extract vital details from the event to be logged in DynamoDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
  state: status,
  environment: env,
  created_at: createdAt,
  updated_at: updatedAt,
  target_url: url,
} = body.deployment_status;
const { id: deploymentId, ref: branch, sha } = body.deployment;
const repo = body.repository.name;
const author = body.deployment_status.creator.login;
const owner = body.repository.owner.login;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  - Retrieving Slack Token:
&lt;/h4&gt;

&lt;p&gt;Fetch the Slack token securely from Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const secret = await sm.send(
  new GetSecretValueCommand({
    SecretId: process.env.SECRET_ARN,
  })
);
const slackToken = JSON.parse(secret.SecretString || "").SLACK_TOKEN;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  - Managing Deployment Statuses:
&lt;/h4&gt;

&lt;p&gt;When a new deployment status hits, it's crucial to determine its relative standing. We do this by fetching the most recent deployment status and comparing the &lt;code&gt;deploymentId&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pk = `REPO#${repo}#ENV#${env}`.toUpperCase();
const sk = "LATEST";

// get deployment status from dynamodb
const ddbRes = await ddbDocClient.send(
  new GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: {
      pk,
      sk,
    },
  })
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the &lt;code&gt;deploymentId&lt;/code&gt; from the incoming webhook doesn't match the last "LATEST" item, it implies a newer deployment has superseded it. As a result, we should:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Update the previous Slack message by removing its action buttons.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Append a note indicating &lt;code&gt;Automatic rejection by subsequent deployment&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Archive the last item under a unique Sort Key, ensuring it's retrievable but not in the immediate queue.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const slackRes = await fetch("https://slack.com/api/chat.update", {
  method: "POST",
  body: JSON.stringify({
    channel: "C04KW81UAAV",
    ts: existingItem.slackTs,
    blocks: oldBlocks,
  }),
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${slackToken}`,
  },
});
await slackRes.json();
await ddbDocClient.send(
  new PutCommand({
    TableName: process.env.TABLE_NAME,
    Item: {
      ...existingItem,
      sk: `DEPLOYMENT#${existingItem.deploymentId}`.toUpperCase(),
      blocks: JSON.stringify(oldBlocks),
    },
  })
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New deployments, on the other hand, engage the &lt;code&gt;chat.postMessage&lt;/code&gt; Slack API, furnishing essential deployment details.&lt;/p&gt;

&lt;p&gt;For deployments matching the "LATEST" &lt;code&gt;deploymentId&lt;/code&gt;, it's crucial to ensure the incoming status isn't redundant. Successful deployments headed for another environment get approve/reject action buttons. This updated deployment, now the newest, gets stored under the "LATEST" sort key, paired with the Slack message timestamp for subsequent edits.&lt;/p&gt;
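&lt;p&gt;The supersede decision described above reduces to a &lt;code&gt;deploymentId&lt;/code&gt; comparison against the stored "LATEST" item. A minimal sketch (the helper name and item shape are mine, assuming the fields shown earlier):&lt;/p&gt;

```typescript
interface LatestItem {
  deploymentId: number;
  slackTs: string;
}

// True when the stored "LATEST" item belongs to an older deployment
// than the incoming webhook, meaning it should be archived and its
// Slack message stripped of action buttons.
const shouldArchiveLatest = (
  incomingDeploymentId: number,
  latest?: LatestItem
): boolean =>
  latest !== undefined && latest.deploymentId !== incomingDeploymentId;
```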

&lt;p&gt;Lastly, successful deployments are logged in a meta item, an inventory of all triumphant deployments segregated by environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const envLatest = await ddbDocClient.send(
    new GetCommand({
      TableName: process.env.TABLE_NAME,
      Key: {
        pk: `LATEST`,
        sk: `${nextEnvs[env]}`,
      },
    })
  );
  const updatedRepo = {
    url: item.url,
    sha: item.sha,
    deploymentId: item.deploymentId,
    deployedAt: Date.now(),
    branch: item.branch,
    owner: item.owner,
  };
  if (!envLatest.Item) {
    await ddbDocClient.send(
      new PutCommand({
        TableName: process.env.TABLE_NAME,
        Item: {
          pk: `LATEST`,
          sk: `${nextEnvs[env]}`,
          repos: {
            [repo]: updatedRepo,
          },
        },
      })
    );
  } else {
    // Update existingItem by replacing the repo
    await ddbDocClient.send(
      new UpdateCommand({
        TableName: process.env.TABLE_NAME,
        Key: {
          pk: `LATEST`,
          sk: `${nextEnvs[env]}`,
        },
        // update the repos attribute
        UpdateExpression: "SET repos.#repo = :repo",
        ExpressionAttributeNames: {
          "#repo": repo,
        },
        ExpressionAttributeValues: {
          ":repo": updatedRepo,
        },
      })
    );
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The structuring of this meta item is worth noting. The deployment details nestle within a &lt;code&gt;repos&lt;/code&gt; map in DynamoDB. Leveraging &lt;code&gt;UpdateCommand&lt;/code&gt; helps pinpoint updates to specific repositories, ensuring that new build data doesn't accidentally overwrite unrelated data due to race conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Integrating the GitHub Webhook&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Navigate to your GitHub Organization's settings. Create a fresh webhook with the following attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Payload URL&lt;/strong&gt; : &lt;code&gt;your-apigateway-url/github&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Content type&lt;/strong&gt; : &lt;code&gt;application/json&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSL Verification&lt;/strong&gt; : Enabled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Events&lt;/strong&gt; : Specifically opt for &lt;code&gt;Deployment statuses&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Status&lt;/strong&gt; : Active&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finalize your configurations. Now, every GitHub Deployment activates the Lambda, meticulously processing deployment data in line with your design.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Slack Interactions
&lt;/h2&gt;

&lt;p&gt;Let's enhance the workflow and user experience by setting up Slack interactions for our deployment notifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Establishing the Interaction Endpoint&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To catch Slack interactions like button presses, we need a dedicated endpoint. Update the &lt;code&gt;lib/blog-cicd-slackbot-stack.ts&lt;/code&gt; to introduce this Lambda function and endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const slackInteractiveFn = new NodejsFunction(this, "SlackInteractiveFn", {
  entry: "lib/lambda/api/slack-interactive.ts",
  ...lambdaProps,
});
table.grantReadWriteData(slackInteractiveFn);
secret.grantRead(slackInteractiveFn);
slackResource
  .addResource("interaction")
  .addMethod("POST", new LambdaIntegration(slackInteractiveFn));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Though the handler for this function (&lt;code&gt;lib/lambda/api/slack-interactive.ts&lt;/code&gt;) is extensive, it's primarily devoted to processing Slack messages and extracting information from the event.&lt;/p&gt;

&lt;h4&gt;
  
  
  - Decoding the Slack Payload:
&lt;/h4&gt;

&lt;p&gt;Given Slack's use of the x-www-form-urlencoded content type, manual decoding becomes imperative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const decodedString = decodeURIComponent(event.body!);
const jsonString = decodedString.replace("payload=", "");
const jsonObject = JSON.parse(jsonString);
const message = jsonObject.message;
const approved = jsonObject.actions[0].value === "approved";
const repo = jsonObject.message.text.split("Repo:*\n")[1].split("+")[0];
const env = jsonObject.message.text
  .split("+deployment+to+")[1]
  .split("+by+")[0];
const authority = jsonObject.user.name; // user who did the interaction
const branch = jsonObject.message.text.split("Branch:*\n")[1].split("+")[0];

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  - Fetching Tokens:
&lt;/h4&gt;

&lt;p&gt;This Lambda could require tokens for both GitHub and Slack. Thus, let's retrieve them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const secret = await sm.send(
  new GetSecretValueCommand({
    SecretId: process.env.SECRET_ARN,
  })
);
const slackToken = JSON.parse(secret.SecretString || "").SLACK_TOKEN;
const githubToken = JSON.parse(secret.SecretString || "").GITHUB_TOKEN;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  - Retrieving User's Image:
&lt;/h4&gt;

&lt;p&gt;To enrich our Slack messages with user details, fetch the user's profile picture via the Slack API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const slackAuthorityRes = await fetch(
  `https://slack.com/api/users.profile.get?user=${jsonObject.user.id}`,
  {
    method: "GET",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${slackToken}`,
    },
  }
);
const slackAuthority = await slackAuthorityRes.json();
const userImg = slackAuthority.profile.image_24;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  - Processing Slack Messages:
&lt;/h4&gt;

&lt;p&gt;A series of operations then follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Embed the user's profile picture, corresponding to their action (approve/reject).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handle vote changes to ensure clarity in responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For deployments to "prod", cross-verify the user's authority against a list of approved users.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
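&lt;p&gt;The production gate in step 3 can be sketched as a single predicate (the helper name is mine, for illustration):&lt;/p&gt;

```typescript
// Any user may act on lower environments, but only a listed approver
// may authorize a prod deployment; everyone else's click is just a vote.
const canAuthorize = (
  env: string,
  user: string,
  approvers: string[]
): boolean => env !== "prod" || approvers.includes(user);
```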

&lt;p&gt;On gaining approval:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Update the LATEST deployment item status&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieve the workflow list for the repository, enabling us to identify the correct &lt;code&gt;workflowId&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initiate the deployment for the succeeding environment&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await ddbDocClient.send(
  new PutCommand({ TableName: process.env.TABLE_NAME, Item: existingItem })
);
const githubListWorkflowsRes = await fetch(
  `https://api.github.com/repos/${existingItem.owner}/${repo}/actions/workflows`,
  {
    method: "GET",
    headers: {
      Accept: "application/vnd.github+json",
      "X-GitHub-Api-Version": "2022-11-28",
      Authorization: `Bearer ${githubToken}`,
    },
  }
);
const { workflows } = await githubListWorkflowsRes.json();
const workflow = workflows.find(
  (workflow: any) =&amp;gt; workflow.name === "deploy-to-env"
);
await fetch(
  `https://api.github.com/repos/${existingItem.owner}/${repo}/actions/workflows/${workflow.id}/dispatches`,
  {
    method: "POST",
    body: JSON.stringify({
      ref: branch,
      inputs: {
        deploy_env: nextEnvs[env],
        oidc_role: oidcs[nextEnvs[env]],
      },
    }),
    headers: {
      Accept: "application/vnd.github+json",
      "X-GitHub-Api-Version": "2022-11-28",
      Authorization: `Bearer ${githubToken}`,
    },
  }
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Concluding this segment, the Slack message is refreshed, and the updated information is stored in DynamoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Managing Approvers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The next phase involves facilitating the management of deployment approvers via Slack slash commands. Three distinct Lambdas handle these functionalities: adding, removing, and listing approvers.&lt;/p&gt;

&lt;p&gt;A designated initial approver seeds the list, and subsequent additions or removals of approvers happen at their discretion. To fetch details about an approver, like their name, email, or profile picture, the &lt;code&gt;add-approver&lt;/code&gt; endpoint uses the Slack token.&lt;/p&gt;

&lt;p&gt;For those eager to dive into the implementation, the code is hosted in the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lib/lambda/api/slack-add-approver.ts&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lib/lambda/api/slack-list-approvers.ts&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lib/lambda/api/slack-remove-approver.ts&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Though we're summarizing this section, remember that having a robust system of approval is paramount, especially for production deployments.&lt;/p&gt;
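
&lt;p&gt;As a rough illustration of the add flow (the item shape here is hypothetical; see the files above for the actual implementation), the Lambda looks the user up via the Slack API and stores an approver record:&lt;/p&gt;

```typescript
// Illustrative only -- the actual handlers live in lib/lambda/api/.
interface SlackProfile {
  real_name: string;
  email: string;
  image_24: string;
}

// Build the DynamoDB item an add-approver handler might store after
// looking the user up via Slack's users.profile.get endpoint.
const toApproverItem = (userId: string, profile: SlackProfile) => ({
  pk: "APPROVER",
  sk: userId,
  name: profile.real_name,
  email: profile.email,
  img: profile.image_24,
});
```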

&lt;h2&gt;
  
  
  Setting Up GitHub Deployments with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;To harness the power of GitHub Deployments, let's configure some environments for your project.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Configuring Environments:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;First, head to your repository's settings page and open the Environments tab. For instance, the URL might resemble: &lt;a href="https://github.com/martzcodes/blog-cicd-slackbot/settings/environments" rel="noopener noreferrer"&gt;&lt;code&gt;https://github.com/martzcodes/blog-cicd-slackbot/settings/environments&lt;/code&gt;&lt;/a&gt;. Here, introduce a fresh environment for each stage you aim to monitor. Ensure their names correspond with your &lt;code&gt;nextEnvs&lt;/code&gt; configuration object. For example, my setup included &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, and &lt;code&gt;prod&lt;/code&gt;.&lt;/p&gt;
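
&lt;p&gt;For reference, the &lt;code&gt;nextEnvs&lt;/code&gt; object can be as simple as a map from each environment to its successor. A sketch assuming the dev/test/prod chain above:&lt;/p&gt;

```typescript
// Maps each environment to its successor; the keys and values must
// match the GitHub Environment names exactly.
const nextEnvs: { [env: string]: string } = {
  dev: "test",
  test: "prod",
};

// "prod" has no entry, so the promotion chain ends there.
const nextEnv = (env: string) => nextEnvs[env];
```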

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Configuring GitHub Actions:&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Now, ensure your deployment-focused GitHub Actions are set to employ GitHub Deployments. Some sample workflows are available at: &lt;a href="https://github.com/martzcodes/blog-cicd-slackbot/tree/main/workflows" rel="noopener noreferrer"&gt;GitHub Sample Workflows&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;pipeline.yml&lt;/code&gt; file, which springs into action upon commits to the &lt;code&gt;main&lt;/code&gt; branch, facilitates continuous deployment to the dev environment. This action sets the stage for our Slack integration. Notably, this pipeline is named &lt;code&gt;Deploy&lt;/code&gt; - a detail the GitHub webhook Lambda verifies.&lt;/p&gt;
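
&lt;p&gt;That name check can be sketched as a simple guard (the payload shape is trimmed to just what the check needs; the real webhook body carries the full GitHub event):&lt;/p&gt;

```typescript
// Trimmed, hypothetical payload shape for the webhook guard.
interface WorkflowRunEvent {
  workflow_run?: { name: string };
}

// Ignore any workflow that is not the "Deploy" pipeline.
const isDeployPipeline = (event: WorkflowRunEvent): boolean =>
  event.workflow_run?.name === "Deploy";
```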

&lt;p&gt;The &lt;code&gt;deploy-to-env.yml&lt;/code&gt; workflow declares matching inputs. It can be invoked by the &lt;code&gt;pipeline.yml&lt;/code&gt; workflow via &lt;code&gt;workflow_call&lt;/code&gt;, or triggered directly via &lt;code&gt;workflow_dispatch&lt;/code&gt;; both triggers accept these inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;inputs:
  deploy_env:
    description: 'Environment to deploy to'
    required: true
    type: string
  oidc_role:
    description: 'OIDC Role to assume for deployment'
    required: true
    type: string

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although your deployment steps could differ, it's crucial to encapsulate your deployment within status update stages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - name: start deployment
    uses: bobheadxi/deployments@v1.2.0
    id: deployment
    with:
      step: start
      env: ${{ inputs.deploy_env }}
  - name: ${{ inputs.deploy_env }} deploy
    env:
      DEPLOY_ENV: ${{ inputs.deploy_env }}
    run: npx cdk deploy --ci --require-approval never --concurrency 5 -v
  - name: update deployment status
    uses: bobheadxi/deployments@v1.2.0
    with:
      step: finish
      status: ${{ job.status }}
      env: ${{ inputs.deploy_env }}
      deployment_id: ${{ steps.deployment.outputs.deployment_id }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The central idea here is that the &lt;code&gt;bobheadxi/deployments&lt;/code&gt; action communicates with GitHub's API to register a deployment for the relevant environment. For a clearer perspective, a live example resides here: &lt;a href="https://github.com/aws-community-projects/cicd/deployments" rel="noopener noreferrer"&gt;Live GitHub Example&lt;/a&gt;.&lt;/p&gt;
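
&lt;p&gt;Under the hood, this amounts to a call to GitHub's Deployments REST API. A sketch of the request the action might issue (the owner/repo/ref/env values are placeholders):&lt;/p&gt;

```typescript
// Rough shape of a "create deployment" request against GitHub's
// Deployments API; values here are placeholders, not the action's code.
const createDeploymentRequest = (
  owner: string,
  repo: string,
  ref: string,
  env: string
) => ({
  url: `https://api.github.com/repos/${owner}/${repo}/deployments`,
  method: "POST",
  body: JSON.stringify({ ref, environment: env, auto_merge: false }),
});
```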

&lt;h2&gt;
  
  
  Live Demonstration
&lt;/h2&gt;

&lt;p&gt;Let's observe this integration in its full glory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Initialization:&lt;/strong&gt; Using the Slack slash command &lt;code&gt;/deployer_list_auth&lt;/code&gt;, I'll confirm our approver list starts empty:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgutf6xsg507r9wno4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwgutf6xsg507r9wno4i.png" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2 - Commencing a Deployment:&lt;/strong&gt; I'll initiate a deployment in my dev environment and, after a brief wait, an &lt;code&gt;in_progress&lt;/code&gt; message surfaces:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37rgrxk8xmrcihr1tm7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37rgrxk8xmrcihr1tm7z.png" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3 - Deployment Completion:&lt;/strong&gt; On the successful completion of the deployment, our message updates, introducing action buttons. Given our next environment, "test", no approvers are mandated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwegub4blae8hnbt1fp32.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwegub4blae8hnbt1fp32.png" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4 - Deployment Approval:&lt;/strong&gt; Tapping the "Approve" button results in another message transformation, indicating approval along with the removal of the action buttons:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xpdb6pyymkty2kd4214.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xpdb6pyymkty2kd4214.png" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5 - Test Environment Deployment:&lt;/strong&gt; Shortly, the &lt;code&gt;in_progress&lt;/code&gt; message for the test environment arrives:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnz9ieinqaji5hevft6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frnz9ieinqaji5hevft6t.png" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6 - Rejection Attempt:&lt;/strong&gt; After the test environment deploys successfully and the buttons appear, my "Reject" action, as a user not yet on the approver list, prompts an updated message that retains the buttons:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro06lnilf0685cflchfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro06lnilf0685cflchfo.png" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7 - Adding to the Approver List:&lt;/strong&gt; I'll employ the &lt;code&gt;/deployer_add_auth&lt;/code&gt; command to add myself to the approver list:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdv7pzh1cbhgjtkddq3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdv7pzh1cbhgjtkddq3y.png" width="568" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8 - Final Rejection:&lt;/strong&gt; After clicking the "Reject" button again, the deployment is successfully rejected, and the message updates accordingly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjevx8qcig1fg4n8gwyuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjevx8qcig1fg4n8gwyuq.png" width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The intertwining of continuous integration and deployment with communication tools, as we've explored in this journey, opens a myriad of possibilities. Not only does it seamlessly merge developer actions with team-wide notifications, but it also introduces an additional layer of transparency and control over our deployment processes. The adaptability of the systems we've integrated, namely GitHub and Slack, offers a rich tapestry of features to build upon, as demonstrated by our use of GitHub Deployments and their potential with GitHub Actions.&lt;/p&gt;

&lt;p&gt;It's worth noting the original plan was to leverage &lt;a href="https://docs.github.com/en/actions/deployment/protecting-deployments/creating-custom-deployment-protection-rules" rel="noopener noreferrer"&gt;GitHub Deployment Protection rules&lt;/a&gt;. These rules present a solid framework for controlling deployments in a more granular way. However, a significant limitation arose: their availability is restricted to either public repositories or those operating under GitHub Enterprise. This limitation led to a more creative approach, embedding the essence of what these rules offer, but in a broader context suitable for various repository types.&lt;/p&gt;

&lt;p&gt;To conclude, technology continues to provide tools and platforms, ripe with features and functionalities, waiting to be moulded and interconnected in ways that best suit our needs. This exploration was just a glimpse into the vast world of CI/CD and team communication integrations. As you embark on your own integrative ventures, remember that while off-the-shelf solutions are great, sometimes thinking outside the box (or outside the repository, in this case) can lead to even more robust and tailor-made solutions for your team.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amplifying AWS Tutorials: Building a Social Notes App with "Sign in with Apple" and AWS Pinpoint Analytics</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Tue, 01 Aug 2023 00:24:33 +0000</pubDate>
      <link>https://forem.com/aws-builders/amplifying-aws-tutorials-building-a-social-notes-app-with-sign-in-with-apple-and-aws-pinpoint-analytics-267l</link>
      <guid>https://forem.com/aws-builders/amplifying-aws-tutorials-building-a-social-notes-app-with-sign-in-with-apple-and-aws-pinpoint-analytics-267l</guid>
      <description>&lt;p&gt;For the 2023 &lt;a href="https://aws.amazon.com/pm/amplify/?sc_channel=el&amp;amp;trk=bc603709-686b-4e27-b79f-07e5de3686ec" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt; + &lt;a href="https://hashnode.com/?source=aws-amplify-2023" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt; Hackathon, I wanted to take a closer look into iOS development and take the &lt;a href="https://aws.amazon.com/getting-started/hands-on/build-ios-app-amplify/" rel="noopener noreferrer"&gt;introductory iOS notes app from AWS Amplify's tutorial&lt;/a&gt; to the next level by incorporating powerful features like federated login via &lt;a href="https://aws.amazon.com/blogs/mobile/federating-users-using-sign-in-with-apple-and-aws-amplify-for-swift/" rel="noopener noreferrer"&gt;Apple's "Sign in with Apple"&lt;/a&gt; Then we'll add some easy Analytics to our app with AWS Pinpoint.&lt;/p&gt;

&lt;p&gt;While AWS's initial tutorial covered basic username/password authentication, users appreciate convenience and security. To support this, we'll add "Sign in with Apple" support, which offers users a seamless and privacy-focused login option. With "Sign in with Apple," users can authenticate with their Apple ID and stay in control of their personal information (including giving them the ability to hide their email).&lt;/p&gt;

&lt;p&gt;A successful app requires understanding user behavior and optimizing user experiences. That's where AWS Pinpoint Analytics comes into play. I'll demonstrate how to integrate AWS Pinpoint into the notes app, enabling us to collect critical user engagement data. With this newfound insight, we can analyze user interactions, monitor feature adoption, and make data-driven decisions to enhance the app's performance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Amplify is a comprehensive development platform offered by Amazon Web Services (AWS) that simplifies the process of building web and mobile applications. It provides developers with a set of tools, services, and libraries to accelerate the development of cloud-powered applications.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This tutorial assumes you are familiar with the basics of AWS Amplify, including authentication setup and working with the Amplify Storage component. &lt;strong&gt;If you're new to these concepts, I recommend checking out the&lt;/strong&gt; &lt;a href="https://aws.amazon.com/getting-started/hands-on/build-ios-app-amplify" rel="noopener noreferrer"&gt;&lt;strong&gt;original AWS Amplify tutorial&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;to get up to speed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The code for this project is located here: &lt;a href="https://github.com/martzcodes/blog-amplifyhackathon-ios-2023" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-amplifyhackathon-ios-2023&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating the App to use "Sign in with Apple"
&lt;/h2&gt;

&lt;p&gt;Amplify and Apple make it really easy to incorporate "Sign in with Apple" (SIWA) into your applications. SIWA lets you federate users into an Amazon Cognito identity pool using the AWS Amplify Libraries for Swift. Federating a user into an identity pool grants them temporary AWS IAM credentials, derived from the identity token Apple provides, which we can then use to access other services like Amazon S3, AppSync, and Pinpoint.&lt;/p&gt;

&lt;p&gt;First, we'll update Amplify's auth to use Apple's provider. Below is the full list of answers when running &lt;code&gt;amplify update auth&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ amplify update auth
 What do you want to do? Walkthrough all the auth configurations
 Select the authentication/authorization services that you want to use: User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)
 Allow unauthenticated logins? (Provides scoped down permissions that you can control via AWS IAM) Yes
 Do you want to enable 3rd party authentication providers in your identity pool? Yes
 Select the third party identity providers you want to configure for your identity pool: Apple

 You've opted to allow users to authenticate via Sign in with Apple. If you haven't already, you'll need to go to https://developer.apple.com/account/#/welcome and configure Sign in with Apple.

 Enter your Bundle Identifier for your identity pool: &amp;lt;your app bundle id from apple&amp;gt;
 Do you want to add User Pool Groups? No
 Do you want to add an admin queries API? No
 Multifactor authentication (MFA) user login options: OFF
 Email based user registration/forgot password: Enabled (Requires per-user email entry at registration)
 Specify an email verification subject: Your verification code
 Specify an email verification message: Your verification code is {####}
 Do you want to override the default password policy for this User Pool? No
 Specify the app's refresh token expiration period (in days): 30
 Do you want to specify the user attributes this app can read and write? No
 Do you want to enable any of the following capabilities?
 Do you want to use an OAuth flow? No
? Do you want to configure Lambda Triggers for Cognito? Yes
? Which triggers do you want to enable for Cognito

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Identity pools do NOT use Cognito User Groups... they use AWS IAM-based access. In order to make use of these for our app, we need to switch AWS AppSync to use IAM auth instead of Cognito User Groups. Let's run &lt;code&gt;amplify update api&lt;/code&gt;. Below are the full answers for this section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify update api
? Select from one of the below mentioned services: GraphQL
...
? Select a setting to edit Authorization modes
? Choose the default authorization type for the API IAM
? Configure additional auth types? No

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can push these changes, we need to update our &lt;code&gt;schema.graphql&lt;/code&gt;. It was using auth based on Cognito User Pools, but we need to switch it to IAM-based auth. For the sake of the demo, we also want to enable guests (unauthenticated users) to read notes. We'll update the &lt;code&gt;schema.graphql&lt;/code&gt; file to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type NoteData
@model
@auth(
    rules: [
      { allow: public, provider: iam, operations: [read] }
      { allow: private, provider: iam, operations: [read, create, update, delete] }
    ]
  ) {
    id: ID!
    name: String!
    description: String
    image: String
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;{ allow: public, provider: iam, operations: [read] }&lt;/code&gt; gives guests read access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;{ allow: private, provider: iam, operations: [read, create, update, delete] }&lt;/code&gt; gives logged in users full CRUD access&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the &lt;code&gt;schema.graphql&lt;/code&gt; updated, we can run &lt;code&gt;amplify codegen models&lt;/code&gt; to regenerate the APIs.&lt;/p&gt;

&lt;p&gt;With these changes, we can run &lt;code&gt;amplify push&lt;/code&gt; and our backend will update to reflect these new changes. Next, we'll need to modify some of the swift code.&lt;/p&gt;

&lt;p&gt;Instead of using &lt;code&gt;Amplify.Auth.signInWithWebUI&lt;/code&gt;, we need to use the &lt;code&gt;SignInWithApple&lt;/code&gt; button and then make a call to &lt;code&gt;federateToIdentityPool&lt;/code&gt;. &lt;code&gt;SignInWithApple&lt;/code&gt; is a capability provided by Apple. Once signed in, it provides an identity token that is used by &lt;code&gt;federateToIdentityPool&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;ContentView.swift&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// add this to the top of the file
import AuthenticationServices

// In the ContentView view:
func configureRequest(_ request: ASAuthorizationAppleIDRequest) {
    request.requestedScopes = [.email]
}
func handleResult(_ result: Result&amp;lt;ASAuthorization, Error&amp;gt;) {
    switch result {
    case .success(let authorization):
        guard let credential = authorization.credential as? ASAuthorizationAppleIDCredential,
                let identityToken = credential.identityToken else {
                    return
                }
        guard let tokenString = String(data: identityToken, encoding: .utf8) else {
            return
        }
        Backend.shared.federateToIdentityPools(with: tokenString)
        self.userData.isSignedIn = true;
    case .failure(let error):
        print(error)
    }
}

// replace the original sign in button with:
SignInWithAppleButton(
    onRequest: configureRequest,
    onCompletion: handleResult
)
.frame(maxWidth: 300, maxHeight: 45)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;SignInWithAppleButton&lt;/code&gt; triggers the SIWA capability provided by Apple. &lt;code&gt;configureRequest&lt;/code&gt; ensures that the email is returned as part of the scope of the &lt;code&gt;identityToken&lt;/code&gt;. On completion, &lt;code&gt;handleResult&lt;/code&gt; parses the &lt;code&gt;identityToken&lt;/code&gt; and sends it to our &lt;code&gt;Backend.swift&lt;/code&gt; service.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;Backend.swift&lt;/code&gt;, we retrieve the &lt;code&gt;AWSCognitoAuthPlugin&lt;/code&gt; from &lt;code&gt;Amplify.Auth&lt;/code&gt; and call &lt;code&gt;federateToIdentityPool&lt;/code&gt; on it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func federateToIdentityPools(with tokenString: String) {
    guard
        let plugin = try? Amplify.Auth.getPlugin(for: "awsCognitoAuthPlugin") as? AWSCognitoAuthPlugin
    else { return }

    Task {
        do {
            let result = try await plugin.federateToIdentityPool(
                withProviderToken: tokenString,
                for: .apple
            )
            print("Successfully federated user to identity pool with result:", result)
        } catch {
            print("Failed to federate to identity pool with error:", error)
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this version of our code, we also have an auth listener that triggers UI updates based on session events. The auth listener in our &lt;code&gt;Backend.swift&lt;/code&gt; file looks like this now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public func listenAuthUpdate() async -&amp;gt; AsyncStream&amp;lt;AuthStatus&amp;gt; { 
    return AsyncStream { continuation in
        continuation.onTermination = { @Sendable status in
                   print("[BACKEND] streaming auth status terminated with status : \(status)")
        }

        // listen to auth events.
        // see https://github.com/aws-amplify/amplify-ios/blob/master/Amplify/Categories/Auth/Models/AuthEventName.swift
        let _ = Amplify.Hub.listen(to: .auth) { payload in            
            print(payload.eventName)
            switch payload.eventName {
            case "Auth.federatedToIdentityPool":
                print("User federated, update UI")
                continuation.yield(AuthStatus.signedIn)
                Task {
                    await self.updateUserData(withSignInStatus: true)
                }
            case "Auth.federationToIdentityPoolCleared":
                print("User unfederated, update UI")
                continuation.yield(AuthStatus.signedOut)
                Task {
                    await self.updateUserData(withSignInStatus: false)
                }
            case HubPayload.EventName.Auth.sessionExpired:
                print("Session expired, show sign in UI")
                continuation.yield(AuthStatus.sessionExpired)
                Task {
                    await self.updateUserData(withSignInStatus: false)
                }
            default:
                print("\(payload)")
                break
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For federation events, the events end up being &lt;code&gt;Auth.federatedToIdentityPool&lt;/code&gt; and &lt;code&gt;Auth.federationToIdentityPoolCleared&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding AWS Pinpoint for App Analytics
&lt;/h2&gt;

&lt;p&gt;Next, we want to see what our users are doing in our Notes app and see if they're really making use of the "new" image upload feature.&lt;/p&gt;

&lt;p&gt;First, we'll need to update amplify by running &lt;code&gt;amplify add analytics&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify add analytics
? Select an Analytics provider Amazon Pinpoint
 Provide your pinpoint resource name: amplifypushup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we'll run &lt;code&gt;amplify push&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We'll need to &lt;a href="https://docs.amplify.aws/lib/analytics/getting-started/q/platform/ios/#view-analytics-console" rel="noopener noreferrer"&gt;add the Amplify Analytics libraries to our app&lt;/a&gt; by making sure the &lt;code&gt;AWSPinpointAnalyticsPlugin&lt;/code&gt; is installed, and then add its initialization in our &lt;code&gt;Backend.swift&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private init() {
  // initialize amplify
  do {
      try Amplify.add(plugin: AWSCognitoAuthPlugin())
      try Amplify.add(plugin: AWSAPIPlugin(modelRegistration: AmplifyModels()))
      try Amplify.add(plugin: AWSS3StoragePlugin())
      try Amplify.add(plugin: AWSPinpointAnalyticsPlugin()) // &amp;lt;-- add this
      try Amplify.configure()
      print("Initialized Amplify");
  } catch {
    print("Could not initialize Amplify: \(error)")
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want to track how popular our image upload feature is, we can add an event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let properties: AnalyticsProperties = [
    "eventPropertyStringKey": "eventPropertyStringValue",
    "eventPropertyIntKey": 123,
    "eventPropertyDoubleKey": 12.34,
    "eventPropertyBoolKey": true
]

let event = BasicAnalyticsEvent(
    name: "imageUploaded",
    properties: properties
)

try Amplify.Analytics.record(event: event)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also emit events related to authentication that Pinpoint will automatically track:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;_userauth.sign_in&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;_userauth.sign_up&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;_userauth.auth_fail&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In my quest to elevate the capabilities of AWS Amplify tutorials, I ventured into iOS development for the 2023 &lt;a href="https://aws.amazon.com/pm/amplify/?sc_channel=el&amp;amp;trk=bc603709-686b-4e27-b79f-07e5de3686ec" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt; + &lt;a href="https://hashnode.com/?source=aws-amplify-2023" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt; Hackathon. Building upon the introductory iOS notes app tutorial provided by AWS Amplify, I took the app to the next level by incorporating sought-after features like "Sign in with Apple" and AWS Pinpoint Analytics.&lt;/p&gt;

&lt;p&gt;The inclusion of "Sign in with Apple" as a federated login option significantly enhances the app's authentication experience. Users can now seamlessly log in with their Apple ID, ensuring both convenience and privacy. This advanced authentication option empowers users to stay in control of their personal information, including the ability to hide their email.&lt;/p&gt;

&lt;p&gt;In addition to empowering users with a better way to sign in, I integrated AWS Pinpoint Analytics into the notes app. By leveraging AWS Pinpoint's analytical capabilities, we gained valuable insights into user engagement and behavior. This data-driven approach allows us to analyze user interactions, monitor feature adoption, and make informed decisions to optimize the app's performance and user experience.&lt;/p&gt;

&lt;p&gt;As you explore the code and implement these advanced features, I encourage you to further experiment and customize the app to suit your specific use case and preferences. Remember to refer to the &lt;a href="https://github.com/martzcodes/blog-amplifyhackathon-ios-2023" rel="noopener noreferrer"&gt;code repository&lt;/a&gt; for detailed implementations.&lt;/p&gt;

&lt;p&gt;Thank you for joining me on this amplified journey to enhance AWS tutorials. With "Sign in with Apple" and AWS Pinpoint Analytics at your fingertips, you're equipped to build robust and engaging mobile apps that connect users with seamless authentication and actionable insights.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Leveraging CDK and Serverless for Bluesky Feed Generation</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Wed, 05 Jul 2023 19:04:11 +0000</pubDate>
      <link>https://forem.com/aws-builders/leveraging-cdk-and-serverless-for-bluesky-feed-generation-52g9</link>
      <guid>https://forem.com/aws-builders/leveraging-cdk-and-serverless-for-bluesky-feed-generation-52g9</guid>
      <description>&lt;p&gt;In the dynamic world of social media, Bluesky is &lt;a href="https://www.wired.com/story/bluesky-my-feeds-custom-algorithms/" rel="noopener noreferrer"&gt;carving out a niche&lt;/a&gt; with its innovative approach to content curation. With its unique "My Feeds" feature, Bluesky empowers users to customize their social media experience by choosing from a variety of feeds, each powered by a different algorithm.&lt;/p&gt;

&lt;p&gt;Creating these diverse feeds, however, requires a &lt;a href="https://github.com/bluesky-social/feed-generator" rel="noopener noreferrer"&gt;Bluesky feed generator&lt;/a&gt;, a tool that necessitates some technical know-how. This is where AWS services come into the picture, simplifying the deployment and operation of a Bluesky feed generator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pe244rdmqjpog3b4860.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pe244rdmqjpog3b4860.png" width="472" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our proposed architecture leverages AWS Fargate, AWS Lambda, and Aurora Serverless. AWS Fargate runs a container designed to parse the Bluesky event stream, capturing only the relevant data. This data is then stored in an Aurora Serverless database. AWS Lambda is employed to serve the feed, working in tandem with AWS Fargate to reduce operational overhead and allow us to focus on core functionality. This combination also serves as our "feed algorithm".&lt;/p&gt;

&lt;p&gt;Deploying the Bluesky feed generator as an AWS CDK project not only streamlines the process of feed creation but also democratizes it, making it accessible to a wider audience. It facilitates easy scaling of resources and efficient cost management, and enables us to focus more on developing unique, user-centric feeds rather than on managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;In the upcoming sections of this blog post, we'll delve deeper into the technical aspects of deploying a Bluesky feed generator using AWS services. We'll provide a step-by-step guide to help you get started, including creating a &lt;a href="https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community" rel="noopener noreferrer"&gt;feed of "skeets" from AWS Employees, AWS Heroes, and AWS Community Builders&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PSST... feel free to&lt;/em&gt; &lt;a href="https://bsky.app/profile/martz.codes" rel="noopener noreferrer"&gt;&lt;em&gt;follow me on Bluesky too&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bluesky-provided Feed Generator
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/bluesky-social/feed-generator" rel="noopener noreferrer"&gt;Bluesky provides a basic feed generator on their GitHub&lt;/a&gt;, but it's a bit like having a camera without the right settings - it lacks the necessary architecture to capture the perfect shot.&lt;/p&gt;

&lt;p&gt;At its heart, the service provided by Bluesky's repo performs three key functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It latches onto the Bluesky Websocket stream and filters events into a database. It's like adjusting the focus on your camera, ensuring only the relevant subjects are in clear view &lt;a href="https://github.com/bluesky-social/feed-generator/blob/main/src/subscription.ts#L8" rel="noopener noreferrer"&gt;1&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It features a feed endpoint that cherry-picks relevant rows from the database. Think of it as the photographer, selecting the best shots for the final album &lt;a href="https://github.com/bluesky-social/feed-generator/blob/main/src/algos/whats-alf.ts#L8" rel="noopener noreferrer"&gt;2&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It includes a static endpoint that "registers" the feed. This is like the metadata of a photo, providing all the necessary information about the shot &lt;a href="https://github.com/bluesky-social/feed-generator/blob/main/src/well-known.ts#L11" rel="noopener noreferrer"&gt;3&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To register the feed service, a script is run that's a bit like the final editing process before the photos are published. It connects to Bluesky and registers the feed name and service URL, making sure everything is picture-perfect &lt;a href="https://github.com/bluesky-social/feed-generator/blob/main/scripts/publishFeedGen.ts" rel="noopener noreferrer"&gt;4&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, here's where we bring in the big guns - AWS CDK. We're going to give this feed generator a major upgrade.&lt;/p&gt;

&lt;p&gt;Firstly, we'll move the WebSocket stream connection to a Fargate service. This is akin to upgrading from a manual focus to an automatic one - it's faster, more efficient, and doesn't require as much manual effort.&lt;/p&gt;

&lt;p&gt;Next, we'll transform the feed endpoint into an AWS Lambda function. This is like having an automated photo editor - it's more efficient, scalable, and doesn't require constant supervision.&lt;/p&gt;

&lt;p&gt;The static endpoint will be relocated to a simple MockIntegration in API Gateway. This is like moving your photo metadata management to a digital platform - it's more efficient, reliable, and easily accessible.&lt;/p&gt;

&lt;p&gt;Lastly, we'll shift the feed registration script to be run by a CustomResource-invoked Lambda. This is like automating your final photo editing process - it's more reliable, efficient, and doesn't require constant attention.&lt;/p&gt;

&lt;p&gt;In essence, we're taking the basic structure provided by Bluesky and supercharging it with the power of AWS CDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crafting The Bluesky Database
&lt;/h2&gt;

&lt;p&gt;First up on our agenda is the creation of the database that will serve as the meeting point for our parser and feed. This setup bears a resemblance to my recent post about &lt;a href="https://dev.to/martzcodes/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk-1cd1-temp-slug-7603700"&gt;Creating an Aurora MySQL Database and Setting Up a Kinesis CDC Stream with AWS CDK&lt;/a&gt;. However, there's a twist - we won't be needing the stream this time, and we'll be employing Aurora Serverless instead.&lt;/p&gt;

&lt;p&gt;Our game plan involves crafting a CDK Construct that houses the database and a CustomResource-invoked Lambda that will lay the groundwork for the database schema. Let's dive into the creation of the &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-db.ts" rel="noopener noreferrer"&gt;database&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;this.db = new ServerlessCluster(this, 'cluster', {
  clusterIdentifier: `bluesky`,
  credentials: Credentials.fromGeneratedSecret('admin'),
  defaultDatabaseName: dbName,
  engine: DatabaseClusterEngine.AURORA_MYSQL,
  removalPolicy: RemovalPolicy.DESTROY,
  enableDataApi: true,
  vpc,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll whip up the lambda function and the custom resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dbInitFn = new NodejsFunction(this, "dbInitFn", {
  functionName: "bluesky-db-init",
  entry: join(__dirname, "lambda/db-init.ts"),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(15),
  tracing: Tracing.ACTIVE,
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: this.db.clusterArn,
    SECRET_ARN: this.db.secret?.secretArn || '',
  },
});
this.db.grantDataApiAccess(dbInitFn);

const initProvider = new Provider(this, `init-db-provider`, {
  onEventHandler: dbInitFn,
});

new CustomResource(this, `init-db-resource`, {
  serviceToken: initProvider.serviceToken,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our lambda's &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/lambda/db-init.ts" rel="noopener noreferrer"&gt;handler&lt;/a&gt; will be interacting with the database using the RDS Data API, taking advantage of the secrets that CloudFormation has created for the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (event.RequestType === "Create") {
    await client.send(cmd(`select 1`));
    console.log("db-init: create tables");
    await client.send(
      cmd(
        `CREATE TABLE IF NOT EXISTS post (uri VARCHAR(255) NOT NULL, cid VARCHAR(255) NOT NULL, author VARCHAR(255) NOT NULL, replyParent VARCHAR(255), replyRoot VARCHAR(255), indexedAt DATETIME NOT NULL, PRIMARY KEY (uri));`
      )
    );
    console.log("created post table");
    await client.send(cmd(`CREATE TABLE IF NOT EXISTS sub_state (service VARCHAR(255) NOT NULL, cursor_value INT NOT NULL, PRIMARY KEY (service));`))
    console.log("created sub_state table");
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we want the tables to be created only when the stack is first deployed, we filter on &lt;code&gt;event.RequestType === "Create"&lt;/code&gt;. However, to err on the side of caution, we've also included &lt;code&gt;IF NOT EXISTS&lt;/code&gt; in the SQL commands. Better safe than sorry!&lt;/p&gt;
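&lt;p&gt;The lifecycle guard can also be sketched in isolation. Below is a minimal, hypothetical helper (the real handler wires up the RDS Data API client and the full column definitions); it only illustrates how the &lt;code&gt;Create&lt;/code&gt; filter and the idempotent DDL interact:&lt;/p&gt;

```typescript
// Minimal sketch of the custom resource's lifecycle guard (hypothetical
// helper names; the real handler sends these statements via the RDS Data API).
type CfnRequestType = "Create" | "Update" | "Delete";

interface CfnEvent {
  RequestType: CfnRequestType;
}

// Idempotent DDL: `IF NOT EXISTS` makes an accidental re-run harmless.
const ddlStatements = [
  `CREATE TABLE IF NOT EXISTS post (uri VARCHAR(255) NOT NULL, PRIMARY KEY (uri));`,
  `CREATE TABLE IF NOT EXISTS sub_state (service VARCHAR(255) NOT NULL, PRIMARY KEY (service));`,
];

// Only the initial Create event triggers table creation; anything else is a no-op.
const statementsFor = (event: CfnEvent): string[] =>
  event.RequestType === "Create" ? ddlStatements : [];
```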

&lt;h2&gt;
  
  
  Constructing the Parser: Connecting to the Bluesky Event Stream
&lt;/h2&gt;

&lt;p&gt;In the next phase of our process, we're going to construct the parser that connects to the Bluesky event stream. This component is the linchpin that connects us to the publicly available Bluesky web socket event stream. It's like the lens of our camera, capturing the events that we're interested in and focusing them into a coherent image.&lt;/p&gt;

&lt;p&gt;Let's dive into the &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-parser.ts" rel="noopener noreferrer"&gt;code&lt;/a&gt; and understand the key parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const cluster = new Cluster(this, "bluesky-feed-generator-cluster", {
  vpc,
  enableFargateCapacityProviders: true,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're setting up our Fargate cluster. This is akin to positioning our camera on a tripod, providing a stable platform for our operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const taskDefinition = new FargateTaskDefinition(
  this,
  "bluesky-feed-generator-task",
  {
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
    },
    memoryLimitMiB: 1024,
    cpu: 512,
  }
);
db.grantDataApiAccess(taskDefinition.taskRole);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we're defining a Fargate task. This is like adjusting the camera's settings, such as aperture and shutter speed, to ensure we capture the best possible shot. We're also granting this task access to our database, much like giving our camera the ability to store the photos it captures.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const logging = new AwsLogDriver({
  logRetention: RetentionDays.ONE_DAY,
  streamPrefix: "bluesky-feed-generator",
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're setting up a log driver here, which is akin to the camera's viewfinder, allowing us to monitor and review our operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;taskDefinition.addContainer("bluesky-feed-parser", {
  logging,
  image: ContainerImage.fromDockerImageAsset(
    new DockerImageAsset(this, "bluesky-feed-parser-img", {
      directory: join(__dirname, ".."),
      platform: Platform.LINUX_ARM64,
    })
  ),
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: db.clusterArn,
    SECRET_ARN: db.secret?.secretArn || '',
  }
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're adding a container to our task definition. This is like attaching a lens to our camera, defining what it will capture. We're also specifying the environment variables, which are akin to the camera's internal settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new FargateService(this, "bluesky-feed-generator", {
  cluster,
  taskDefinition,
  enableExecuteCommand: true,
  // fargate service needs to select subnets with the NAT in order to access AWS services
  vpcSubnets: {
    subnetType: SubnetType.PRIVATE_WITH_EGRESS,
  },
  securityGroups: [securityGroup]
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we're creating a new Fargate service, which is like pressing the camera's shutter button, setting everything into motion. We're specifying that the service should be able to execute commands and that it should select subnets with NAT to access AWS services, ensuring our camera can communicate with the outside world.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Service Code
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/tree/main/bluesky-feed-parser" rel="noopener noreferrer"&gt;code&lt;/a&gt; for our Fargate Service is heavily inspired by the feed generator provided by Bluesky. However, we've stripped out all the unnecessary parts (such as the Express server). All we need to do is connect to the WebSocket stream and save relevant events to the already-created database.&lt;/p&gt;

&lt;p&gt;When we created the &lt;code&gt;DockerImageAsset&lt;/code&gt; above, we pointed it to our root directory which contains a &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/Dockerfile" rel="noopener noreferrer"&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/a&gt;. This &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/Dockerfile" rel="noopener noreferrer"&gt;&lt;code&gt;Dockerfile&lt;/code&gt;&lt;/a&gt; installs the dependencies and runs the app which &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/bluesky-feed-parser/app.ts" rel="noopener noreferrer"&gt;creates a FirehoseSubscription&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (const post of ops.posts.creates) {
  if (awsCommunityDids.includes(post.author)) {
    const user = awsCommunityDidsToKeys[post.author];
    console.log(`${user} posted ${post.record.text}`);
    postsToCreate.push({
      uri: post.uri,
      cid: post.cid,
      author: user,
      replyParent: post.record?.reply?.parent.uri ?? null,
      replyRoot: post.record?.reply?.root.uri ?? null,
      indexedAt: new Date().toISOString().slice(0, 19).replace("T", " "),
    });
  }
}

if (postsToCreate.length &amp;gt; 0) {
  console.log(JSON.stringify({ postsToCreate }));
  const insertSql = `INSERT INTO post (uri, cid, author, replyParent, replyRoot, indexedAt) VALUES ${postsToCreate
    .map(
      () =&amp;gt; "(:uri, :cid, :author, :replyParent, :replyRoot, :indexedAt)"
    )
    .join(", ")} ON DUPLICATE KEY UPDATE uri = uri`;

  const insertParams = postsToCreate.flatMap((post) =&amp;gt; [
    { name: "uri", value: { stringValue: post.uri } },
    { name: "cid", value: { stringValue: post.cid } },
    { name: "author", value: { stringValue: post.author } },
    {
      name: "replyParent",
      value: post.replyParent
        ? { stringValue: post.replyParent }
        : { isNull: true },
    },
    {
      name: "replyRoot",
      value: post.replyRoot
        ? { stringValue: post.replyRoot }
        : { isNull: true },
    },
    { name: "indexedAt", value: { stringValue: post.indexedAt } },
  ]);

  const insertCmd = cmd(insertSql, insertParams);
  await client.send(insertCmd);
  console.log(`Created ${postsToCreate.length} posts`);
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is akin to the post-processing phase in photography. We're selecting the best shots (or in this case, posts) based on our predefined criteria, and storing them in our database. We're also ensuring that we handle deletions appropriately, keeping our collection of shots up-to-date.&lt;/p&gt;
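&lt;p&gt;Two pieces of that handler are worth isolating: building the multi-row &lt;code&gt;VALUES&lt;/code&gt; clause, and mapping optional fields to Data API values. The sketch below uses hypothetical standalone helpers that mirror that logic (trimmed to three columns for brevity):&lt;/p&gt;

```typescript
// Sketch: helpers mirroring the batch-insert logic (illustrative names).
interface PostRow {
  uri: string;
  cid: string;
  author: string;
}

// One "(:uri, :cid, :author)" tuple per row; the ON DUPLICATE KEY clause
// makes re-inserting an already-seen post a no-op.
const buildInsertSql = (rows: PostRow[]): string =>
  `INSERT INTO post (uri, cid, author) VALUES ${rows
    .map(() => "(:uri, :cid, :author)")
    .join(", ")} ON DUPLICATE KEY UPDATE uri = uri`;

// The RDS Data API represents SQL NULL as `{ isNull: true }`.
type SqlValue = { stringValue: string } | { isNull: true };

const toSqlValue = (v: string | null): SqlValue =>
  v ? { stringValue: v } : { isNull: true };
```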

&lt;h2&gt;
  
  
  The Bluesky Feed
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-feed.ts" rel="noopener noreferrer"&gt;final CDK Construct&lt;/a&gt; creates an API Gateway with two endpoints, and the CustomResource that registers the feed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Feed
&lt;/h3&gt;

&lt;p&gt;This section of the code is primarily concerned with setting up the API Gateway and the DNS records for the service. Here are the key parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const hostedzone = HostedZone.fromLookup(this, "hostedzone", {
  domainName: zoneDomain,
});
const certificate = new Certificate(this, "certificate", {
  domainName,
  validation: CertificateValidation.fromDns(hostedzone),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're looking up the hosted zone for our domain and creating a certificate for it. This is like setting up the address and credentials for our online photo gallery.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const api = new RestApi(this, "RestApi", {
  defaultMethodOptions: {
    methodResponses: [{ statusCode: "200" }],
  },
  deployOptions: {
    tracingEnabled: true,
    metricsEnabled: true,
    dataTraceEnabled: true,
  },
  endpointConfiguration: {
    types: [EndpointType.REGIONAL],
  },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we're setting up the REST API. This is like setting up the interface for our online gallery, defining how users will interact with it.&lt;/p&gt;

&lt;p&gt;Of course, let's not forget the MockIntegration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const didIntegration = new MockIntegration(didOptions);
const didResource = api.root
  .addResource(".well-known")
  .addResource("did.json");
didResource.addMethod("GET", didIntegration, {
  methodResponses: [
    {
      statusCode: "200",
    },
  ],
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're setting up a MockIntegration. In the context of AWS API Gateway, a MockIntegration is a type of integration that allows you to simulate API behavior without implementing any backend logic. It's like a placeholder or a dummy that returns pre-configured responses to requests.&lt;/p&gt;

&lt;p&gt;In this case, we're using it to serve a static JSON response for the &lt;code&gt;did.json&lt;/code&gt; endpoint under the &lt;code&gt;.well-known&lt;/code&gt; path. This endpoint is typically used to provide a standard way to discover information about the domain, and in this case, it's providing information about the Bluesky feed generator service.&lt;/p&gt;
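&lt;p&gt;For reference, the static document behind this endpoint looks roughly like the sketch below. The field names follow Bluesky's feed-generator repo; the hostname is a placeholder, not the deployed domain:&lt;/p&gt;

```typescript
// Sketch of the did:web document served at /.well-known/did.json.
// "example.com" is a placeholder for the feed generator's domain.
const hostname = "example.com";

const didDoc = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: `did:web:${hostname}`,
  service: [
    {
      id: "#bsky_fg",
      type: "BskyFeedGenerator",
      serviceEndpoint: `https://${hostname}`,
    },
  ],
};
```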

&lt;p&gt;This is akin to having a static information page in our photo gallery that provides details about the gallery itself. It doesn't change or interact with the visitor but provides essential information for anyone who asks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const feedFn = new NodejsFunction(this, "feed", {
  functionName: "bluesky-feed",
  entry: join(__dirname, "lambda/feed.ts"),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.seconds(30),
  tracing: Tracing.ACTIVE,
  environment: {
    DB_NAME: dbName,
    CLUSTER_ARN: db.clusterArn,
    SECRET_ARN: db.secret?.secretArn || "",
  },
});
db.grantDataApiAccess(feedFn);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we're defining a new Node.js function that will serve as the feed for our service. This is like setting up the mechanism that will display the photos in our gallery.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api.root
  .addResource("xrpc")
  .addResource("app.bsky.feed.getFeedSkeleton")
  .addMethod("GET", feedIntegration);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where we're defining the endpoint for our feed. This is like setting up the URL where users can view our photo gallery.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api.addDomainName(`Domain`, {
  domainName,
  certificate,
  securityPolicy: SecurityPolicy.TLS_1_2,
});
new ARecord(scope, `ARecord`, {
  zone: hostedzone,
  recordName: domainName,
  target: RecordTarget.fromAlias(new ApiGateway(api)),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we're associating our domain name with our API and creating an A record for it. This is like linking our online gallery to our chosen web address, making it accessible to the public.&lt;/p&gt;

&lt;h3&gt;
  
  
  Registering the feed
&lt;/h3&gt;

&lt;p&gt;We create another lambda and custom resource with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const publishSecret = Secret.fromSecretCompleteArn(this, "publish-secret", props.publishFeed.blueskySecretArn);
const publishFeedFn = new NodejsFunction(this, "publish-feed", {
  functionName: "bluesky-publish-feed",
  entry: join(__dirname, "lambda/publish-feed.ts"),
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.seconds(30),
  tracing: Tracing.ACTIVE,
  environment: {
    HANDLE: props.publishFeed.handle,
    SECRET_ARN: props.publishFeed.blueskySecretArn,
    FEEDGEN_HOSTNAME: domainName,
    FEEDS: JSON.stringify(props.publishFeed.feeds),
  },
});
publishSecret.grantRead(publishFeedFn);

const publishProvider = new Provider(this, `publish-feed-provider`, {
  onEventHandler: publishFeedFn,
});

new CustomResource(this, `publish-feed-resource`, {
  serviceToken: publishProvider.serviceToken,
  properties: {
    Version: `${Date.now()}`,
  },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we make sure that it has access to our Bluesky App password (created on our Bluesky account's settings page).&lt;/p&gt;

&lt;p&gt;The handler code follows the same process as Bluesky's own &lt;a href="https://github.com/bluesky-social/feed-generator/blob/main/scripts/publishFeedGen.ts" rel="noopener noreferrer"&gt;publishFeedGen.ts&lt;/a&gt; script, except we get the password from the secrets manager first. This code runs with every deployment and supports registering multiple feeds pointing at the same lambda.&lt;/p&gt;
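&lt;p&gt;Because the feed definitions arrive as a JSON-encoded environment variable, the handler can decode them and register each short name in turn. A rough sketch (the values are illustrative, not the deployed configuration):&lt;/p&gt;

```typescript
// Sketch: decode the FEEDS env var and enumerate the short names to register.
interface FeedConfig {
  displayName: string;
  description: string;
}

// Stand-in for process.env.FEEDS, which the CDK construct sets via
// JSON.stringify(props.publishFeed.feeds).
const FEEDS = JSON.stringify({
  "aws-community": {
    displayName: "AWS Community",
    description: "Posts from AWS Employees, Heroes, and Community Builders",
  },
});

const feeds: Record<string, FeedConfig> = JSON.parse(FEEDS);
// One registration call per short name, all pointing at the same Lambda.
const shortNames = Object.keys(feeds);
```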

&lt;h2&gt;
  
  
  Tying It All Together: Deploying the Stack
&lt;/h2&gt;

&lt;p&gt;Having created the individual constructs, we now need to assemble them into a &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/lib/bluesky-feed-generator-stack.ts" rel="noopener noreferrer"&gt;cohesive stack&lt;/a&gt;. This is akin to putting together our camera, lens, and tripod into a complete photography setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const vpc = new Vpc(this, "vpc");
const securityGroup = new SecurityGroup(this, "security-group", {
  vpc,
  allowAllOutbound: true,
});

const dbName = 'bluesky';
const { db } = new BlueskyDb(this, 'bluesky-db', {
  dbName,
  vpc,
});

const domainName = 'martz.codes';

new BlueskyParser(this, 'bluesky-parser', {
  db,
  dbName,
  securityGroup,
  vpc,
});

new BlueskyFeed(this, 'bluesky-feed', {
  db,
  dbName,
  domainName,
  publishFeed,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we're first setting up a Virtual Private Cloud (VPC) and a security group. This is like choosing a location for our photo shoot and setting up the necessary security measures.&lt;/p&gt;

&lt;p&gt;Next, we're creating our Bluesky database within this VPC. This is akin to setting up our storage system for the photos we'll capture.&lt;/p&gt;

&lt;p&gt;We then instantiate our Bluesky parser and feed constructs, passing in the necessary parameters such as the database, domain name, and security group. This is like setting up our camera and lens, ready to start capturing photos.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://github.com/aws-community-projects/bluesky-feed-generator/blob/main/bin/bluesky-feed-generator.ts" rel="noopener noreferrer"&gt;bin file&lt;/a&gt;, we include the required &lt;code&gt;publishFeed&lt;/code&gt; properties:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publishFeed: {
    handle: 'martz.codes',
    blueskySecretArn: "arn:aws:secretsmanager:us-east-1:359317520455:secret:bluesky-rQXJxQ",
    feeds: {
      "aws-community": {
        displayName: "AWS Community",
        description: "This is a test feed served from an AWS Lambda. It is a list of AWS Employees, AWS Heroes and AWS Community Builders",
      }
    },
  },

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is like setting up the details of our photo gallery, including the name, description, and access credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;It's worth noting that this stack was built iteratively. While it should work as expected, there may be a missing dependency that could affect the deployment order for the CustomResources.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once deployed, we can visit the feed's page on Bluesky: &lt;a href="https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community" rel="noopener noreferrer"&gt;https://bsky.app/profile/did:plc:a62mzn6xxxxwktpdprw2lvnc/feed/aws-community&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The URL structure is as follows: &lt;code&gt;https://bsky.app/profile/&amp;lt;owner's DID&amp;gt;/feed/&amp;lt;short-name&amp;gt;&lt;/code&gt;. When this URL is loaded in Bluesky, the Bluesky service makes a call to the feed URL. The feed then replies with the URIs of several posts, which the Bluesky service hydrates into full posts. This is like visiting our online photo gallery, where the service fetches and displays the photos based on the visitor's request.&lt;/p&gt;
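&lt;p&gt;Concretely, the feed Lambda's reply is a "skeleton": a list of post URIs plus an optional cursor for pagination. A sketch of the shape (the URI and CID values are made up; the cursor format mirrors Bluesky's example generator):&lt;/p&gt;

```typescript
// Sketch of an app.bsky.feed.getFeedSkeleton response body.
interface SkeletonItem {
  post: string; // an at:// URI that Bluesky hydrates into a full post
}

// Cursor format mirroring Bluesky's example generator: "<epoch millis>::<cid>".
const toCursor = (indexedAt: string, cid: string): string =>
  `${new Date(indexedAt).getTime()}::${cid}`;

const skeleton: { cursor?: string; feed: SkeletonItem[] } = {
  cursor: toCursor("2023-07-05T19:04:11.000Z", "bafyexamplecid"),
  feed: [{ post: "at://did:plc:example/app.bsky.feed.post/3jzexample" }],
};
```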

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! We've walked through the process of deploying a Bluesky feed generator using AWS services, specifically AWS CDK, Fargate, Lambda, and Aurora Serverless. This setup allows us to parse the Bluesky event stream, store relevant events in a database, and serve the feed using a serverless architecture. It's like setting up a fully automated photography studio that captures, stores, and displays photos based on specific criteria.&lt;/p&gt;

&lt;p&gt;But this is just the beginning. There are countless ways you could expand on this setup to suit your specific needs or explore new possibilities. Here are a few ideas to get you started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customize Your Feed Algorithm:&lt;/strong&gt; The feed algorithm we used in this example is relatively simple, focusing on posts from specific authors. You could expand on this by incorporating more complex criteria, such as keywords, hashtags, or even sentiment analysis. This would be like using advanced filters or editing techniques to select and enhance your photos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrate with Other Services:&lt;/strong&gt; You could integrate your feed generator with other AWS services or third-party APIs to add more functionality. For example, you could use AWS Comprehend to analyze the sentiment of posts, AWS Translate to support multiple languages, or AWS SNS to send notifications when new posts are added to the feed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a User Interface:&lt;/strong&gt; While Bluesky provides a platform to view the feeds, you could also create your own user interface to display the feeds in a unique way. This could be a web app, a mobile app, or even an Alexa skill. This would be like creating your own online gallery or photo app to showcase your photos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scale Up:&lt;/strong&gt; Our setup is designed to be scalable, but you could take this further by implementing more advanced scaling strategies. For example, you could use AWS Auto Scaling to automatically adjust the capacity of your Fargate service based on demand, or AWS ElastiCache to improve the performance of your database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure Your Setup:&lt;/strong&gt; While we've implemented basic security measures, there's always more you can do to protect your data and your users. You could use AWS Shield for DDoS protection, AWS WAF for web application firewall, or AWS Macie to discover, classify, and protect sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remember, the sky's the limit when it comes to what you can achieve with AWS services and Bluesky. So don't be afraid to experiment, innovate, and create something truly unique.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Destroy THEIR Stacks - Ephemeral CDK Stacks as a Service</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Thu, 29 Jun 2023 15:53:02 +0000</pubDate>
      <link>https://forem.com/aws-builders/destroy-their-stacks-ephemeral-cdk-stacks-as-a-service-5fa</link>
      <guid>https://forem.com/aws-builders/destroy-their-stacks-ephemeral-cdk-stacks-as-a-service-5fa</guid>
      <description>&lt;p&gt;In this post, we will enhance our ephemeral stack architecture by consolidating the destruction process to a central service. We will utilize a stack lifetime tag in conjunction with the &lt;code&gt;MakeDestroyable&lt;/code&gt; aspect from the &lt;a href="https://www.npmjs.com/package/@aws-community/ephemeral" rel="noopener noreferrer"&gt;@aws-community/ephemeral&lt;/a&gt; npm library.&lt;/p&gt;

&lt;p&gt;Ephemeral stacks are temporary stacks in AWS that are designed to exist for a short period of time. This is particularly useful in development environments where you want to test something but don't need the stack to be up indefinitely.&lt;/p&gt;
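&lt;p&gt;At its core, the decision the central service makes is just "has this stack outlived its lifetime tag?" A rough sketch of that check (the tag name and duration unit are assumptions for illustration):&lt;/p&gt;

```typescript
// Sketch: decide whether a stack is due for destruction based on a lifetime
// tag. The tag name ("stack-lifetime-days") is hypothetical; the real
// library's tag format may differ.
const isExpired = (
  createdAt: Date,
  lifetimeDays: number,
  now: Date
): boolean =>
  now.getTime() - createdAt.getTime() > lifetimeDays * 24 * 60 * 60 * 1000;
```

&lt;p&gt;The destroyer service would list stacks, read the tag and creation time, and delete any stack for which this check returns &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;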

&lt;p&gt;This article is a follow-up to two previous posts on the topic of ephemeral stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://matt.martz.codes/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction" rel="noopener noreferrer"&gt;Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/martzcodes/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops-3f42-temp-slug-4420142"&gt;Blink and It's Gone: Embracing Ephemeral CDK Stacks for Efficient DevOps&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Value of Ephemeral Stacks
&lt;/h2&gt;

&lt;p&gt;But why would you want to use ephemeral stacks?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Savings 💰:&lt;/strong&gt; By using resources only for the time needed, you can significantly reduce costs. You no longer have to worry about unused resources accumulating costs because the stacks self-terminate after the stipulated period.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficient Resource Allocation 🔄:&lt;/strong&gt; In fast-paced development environments, resources are constantly being allocated and deallocated. Ephemeral stacks make this process more efficient, ensuring that resources are available when needed and are released when no longer in use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Complexity 🧠:&lt;/strong&gt; Keeping track of which resources are actively being used can be a complex task. By using ephemeral stacks, you know that any active resource is being used for a good reason. This reduces the complexity of managing your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security 🔒:&lt;/strong&gt; Minimizing the lifespan of your stacks reduces the exposure window for potential security vulnerabilities. By limiting the duration a resource is up, you inherently limit the time it can be exploited.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Realistic Testing Environments 🧪:&lt;/strong&gt; Ephemeral stacks are great for simulating production environments without the permanence. They allow you to conduct realistic tests and experiments, enabling you to glean insights and identify issues that might not be evident in traditional development environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Clean-Up 🧹:&lt;/strong&gt; Forget the days of manually cleaning up resources post-testing. With the self-destruction aspect of ephemeral stacks, the clean-up is automatic. This not only saves time but also ensures that no remnants are left behind that can cause clutter or additional costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy Scalability for Temporary Needs:&lt;/strong&gt; Sometimes you need to scale resources quickly to meet a temporary need (e.g., a one-time data processing job). Ephemeral stacks allow for such scalability without the long-term commitment.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Armed with these benefits, it's clear that ephemeral stacks are an incredibly powerful tool for optimizing AWS resource management, especially in development environments. Let's dive into how we can further improve the architecture by consolidating the destruction process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Key Components
&lt;/h2&gt;

&lt;p&gt;We will go through the changes to the &lt;a href="https://www.npmjs.com/package/@aws-community/ephemeral" rel="noopener noreferrer"&gt;@aws-community/ephemeral&lt;/a&gt; npm library and demonstrate how it can be used.&lt;/p&gt;

&lt;p&gt;The code for the &lt;a href="https://www.npmjs.com/package/@aws-community/ephemeral" rel="noopener noreferrer"&gt;@aws-community/ephemeral&lt;/a&gt; npm library is here: &lt;a href="https://github.com/aws-community-projects/ephemeral" rel="noopener noreferrer"&gt;https://github.com/aws-community-projects/ephemeral&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example project that uses it is here: &lt;a href="https://github.com/martzcodes/blog-ephemeral" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-ephemeral&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The DestroyMe Stack and Construct
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;DestroyMeConstruct&lt;/code&gt; uses the &lt;code&gt;SelfDestructAspect&lt;/code&gt; from previous posts to ensure that all of the AWS Resources in the stack are set to a DESTROY retention policy. Additionally, it sets a &lt;code&gt;STACK_LIFE&lt;/code&gt; tag on the stack, which indicates how long the stack should remain if there are no updates to it. This tag will be used by an external service to pick up and process the stack for destruction. Here's the code snippet for this part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Tags.of(Stack.of(this)).add('STACK_LIFE', duration.toSeconds().toString());
Aspects.of(Stack.of(this)).add(new SelfDestructAspect());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;DestroyMeStack&lt;/code&gt; is a higher-level construct that simply includes &lt;code&gt;DestroyMeConstruct&lt;/code&gt;, making it convenient to extend.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Destroyer Stack
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;DestroyerStack&lt;/code&gt; is the core of this enhancement. Instead of having each stack deploy a step function that will self-destroy, which could lead to conflicts or complications, we centralize the destruction process.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DestroyerStack&lt;/code&gt; uses &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event-list.html" rel="noopener noreferrer"&gt;AWS Service Events&lt;/a&gt; from &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks-event-bridge.html" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; to detect stacks that have the &lt;code&gt;STACK_LIFE&lt;/code&gt; tag. Every time a CDK Stack deploys, it generates a CloudFormation Stack Status event. Using this event, we can fetch the stack details, including the tags, and determine if we should track the stack for deletion.&lt;/p&gt;

&lt;p&gt;If the stack has the &lt;code&gt;STACK_LIFE&lt;/code&gt; tag, we add an entry into a DynamoDB table with a &lt;code&gt;TimeToLive&lt;/code&gt; (TTL) property. This TTL is the sum of the current time and the stack life duration. When DynamoDB removes the item due to TTL expiration, we trigger a Lambda function to delete the stack.&lt;/p&gt;
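&lt;p&gt;The TTL logic described above can be sketched as a pure function (a minimal sketch; &lt;code&gt;itemForStack&lt;/code&gt; is a hypothetical helper name, and the tag shape mirrors the &lt;code&gt;DescribeStacks&lt;/code&gt; output):&lt;/p&gt;

```typescript
// Hypothetical helper: given a stack's tags and the current epoch time in
// seconds, compute the DynamoDB item to track (or null when the stack has no
// STACK_LIFE tag and should not be tracked).
interface StackTag {
  Key: string;
  Value: string;
}

export const itemForStack = (
  stackName: string,
  tags: StackTag[],
  nowInSeconds: number,
) => {
  const stackLife = tags.find((tag) => tag.Key === "STACK_LIFE")?.Value;
  if (!stackLife) {
    return null; // not an ephemeral stack; nothing to track
  }
  // TTL is the current time plus the requested stack lifetime
  return { pk: stackName, ttl: Math.ceil(nowInSeconds + Number(stackLife)) };
};
```

&lt;p&gt;Re-running this on every deployment event is what keeps resetting the TTL while a developer is still actively deploying the stack.&lt;/p&gt;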

&lt;p&gt;Here's how the table is created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tableName = 'destroyer';
const table = new Table(this, tableName, {
  tableName,
  partitionKey: {
    name: 'pk',
    type: AttributeType.STRING,
  },
  billingMode: BillingMode.PAY_PER_REQUEST,
  removalPolicy: RemovalPolicy.DESTROY,
  timeToLiveAttribute: 'ttl',
  stream: StreamViewType.NEW_AND_OLD_IMAGES,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In case the stack deletion fails, we can also track the &lt;code&gt;DELETE_FAILED&lt;/code&gt; status and send notifications to an SNS Topic for manual intervention.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const failTopic = new Topic(this, 'fail-topic');
new Rule(this, 'delete-failed-rule', {
  eventPattern: {
    source: ['aws.cloudformation'],
    detailType: ['CloudFormation Stack Status Change'],
    detail: {
      resourceStatus: ['DELETE_FAILED'],
    },
  },
  targets: [new SnsTopic(failTopic)],
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CloudFormation Event Function
&lt;/h3&gt;

&lt;p&gt;This Lambda function is triggered by AWS Service Events. It retrieves information from the CloudFormation service and writes to the DynamoDB table.&lt;/p&gt;

&lt;p&gt;Here's how the Lambda function is configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const cloudformationFn = new NodejsFunction(this, 'fn-cloudformation', {
  runtime: Runtime.NODEJS_18_X,
  memorySize: 1024,
  timeout: Duration.minutes(5),
  entry: join(__dirname, local ? 'destroyer-stack.fn-cloudformation.ts' : 'destroyer-stack.fn-cloudformation.js'),
  initialPolicy: [
    new PolicyStatement({
      effect: Effect.ALLOW,
      actions: [
        'cloudformation:Describe*',
        'cloudformation:Get*',
        'cloudformation:List*',
      ],
      resources: ['*'],
    }),
  ],
});
table.grantReadWriteData(cloudformationFn);
cloudformationFn.addEnvironment('DESTROY_TABLE_NAME', table.tableName);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then we trigger the lambda on those AWS Service Events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Rule(this, 'cloudformation-rule', {
  eventPattern: {
    source: ['aws.cloudformation'],
    detailType: ['CloudFormation Stack Status Change'],
  },
  targets: [new LambdaFunction(cloudformationFn)],
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case we don't really care whether the stack deployed successfully or not; we reset the ttl on every deployment (failure or not). Either way, a developer is actively working on the project, so we don't want to delete the stack.&lt;/p&gt;

&lt;p&gt;The lambda handler code simply describes the stack and if the &lt;code&gt;STACK_LIFE&lt;/code&gt; tag exists, it puts the item into DynamoDB with the StackName as the primary key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const StackName = event.detail['stack-id'];
const describeCommand = new DescribeStacksCommand({
StackName,
});
const stacks = await cf.send(describeCommand);
const stack = stacks.Stacks?.[0];
const stackLife = stack?.Tags?.find((tag) =&amp;gt; tag.Key === 'STACK_LIFE')?.Value;
if (stackLife) {
    try {
      await ddbDocClient.send(
        new PutCommand({
          TableName: process.env.DESTROY_TABLE_NAME,
          Item: {
            pk: stack.StackName,
            ttl: Math.ceil(new Date().getTime() / 1000 + Number(stackLife)),
          },
        }),
      );
    } catch (e) {
      console.log(e);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Destroy Function
&lt;/h3&gt;

&lt;p&gt;The destroy function operates similarly. We make sure that it is triggered from the DynamoDB Stream and that it has access to &lt;code&gt;cloudformation:DeleteStack&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;destroyFn.addEventSource(
  new DynamoEventSource(table, {
    startingPosition: StartingPosition.LATEST,
  }),
);
destroyFn.addToRolePolicy(
  new PolicyStatement({
    actions: ['cloudformation:DeleteStack'],
    resources: ['*'],
    effect: Effect.ALLOW,
  }),
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The destroy function handler code filters the DynamoDB stream records to make sure that the item is being removed and that the ttl has actually expired. For safety, there's an escape hatch: if you remove an item from DynamoDB before its expiration, the stack won't be deleted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const currentTimeInSeconds = new Date().getTime() / 1000;
if (item.ttl &amp;gt; currentTimeInSeconds) {
  // item was manually removed and not expired
  console.log('item was manually removed and not expired', currentTimeInSeconds, item.ttl);
  return [...p];
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
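&lt;p&gt;The surrounding filtering can be sketched as follows (a minimal sketch with simplified record shapes; the real handler unmarshalls the &lt;code&gt;OldImage&lt;/code&gt; from each DynamoDB Streams record, and &lt;code&gt;stacksToDestroy&lt;/code&gt; is a hypothetical helper name):&lt;/p&gt;

```typescript
// Hypothetical sketch: collect the stack names whose items were removed by
// DynamoDB TTL (eventName REMOVE) with a ttl that has actually expired.
interface StreamRecord {
  eventName: string;
  oldItem?: { pk: string; ttl: number };
}

export const stacksToDestroy = (
  records: StreamRecord[],
  nowInSeconds: number,
) => {
  return records.reduce((acc: string[], record) => {
    if (record.eventName !== "REMOVE") {
      return acc; // inserts and updates are ignored
    }
    const item = record.oldItem;
    if (!item) {
      return acc;
    }
    if (item.ttl > nowInSeconds) {
      // escape hatch: item was manually removed before expiration
      return acc;
    }
    return [...acc, item.pk];
  }, []);
};
```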



&lt;p&gt;Then it deletes the expired stacks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const client = new CloudFormationClient({});
await Promise.all(
  stacksToDestroy.map(
    async (stackName) =&amp;gt;
      await client.send(
        new DeleteStackCommand({
          StackName: stackName,
        }),
      ),
  ),
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Use @aws-community/ephemeral
&lt;/h2&gt;

&lt;p&gt;Example Code for this section is located here: &lt;a href="https://github.com/martzcodes/blog-ephemeral" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-ephemeral&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, you need to deploy the &lt;code&gt;DestroyerStack&lt;/code&gt;. You can do this in a separate project or by manually deploying the stack with &lt;code&gt;npx cdk deploy DestroyerStack&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { DestroyerStack } from '@aws-community/ephemeral';

const app = new cdk.App();
new DestroyerStack(app, 'DestroyerStack');

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the &lt;code&gt;DestroyerStack&lt;/code&gt; is in place and monitoring the AWS Service Events, you can make any of your stacks ephemeral by extending the &lt;code&gt;DestroyMeStack&lt;/code&gt; or adding the &lt;code&gt;DestroyMeConstruct&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here, we have extended the stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { DestroyMeStack, DestroyMeStackProps } from '@aws-community/ephemeral';
import { Construct } from 'constructs';

export class BlogEphemeralStack extends DestroyMeStack {
  constructor(scope: Construct, id: string, props: DestroyMeStackProps) {
    super(scope, id, props);
    // your stuff here
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then deploy it using &lt;code&gt;npx cdk deploy EphemeralStack&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new BlogEphemeralStack(app, 'EphemeralStack', {
  destroyMeEnable: true,
  destroyMeDuration: cdk.Duration.minutes(3),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;It is important to note that the Dynamo TTL timing is NOT exact&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TTL typically deletes expired items within a few days. Depending on the size and activity level of a table, the actual delete operation of an expired item can vary. Because TTL is meant to be a background process, the nature of the capacity used to expire and delete items via TTL is variable (but free of charge).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you need to delete the stack sooner, you can manually do it or delete the item from DynamoDB after the ttl has expired.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we dove into enhancing our ephemeral stack architecture by centralizing the stack destruction process and employing a stack life tag with the &lt;code&gt;MakeDestroyable&lt;/code&gt; aspect from the &lt;code&gt;@aws-community/ephemeral&lt;/code&gt; npm library. This approach ensures that all of the AWS resources in the stack are set to a DESTROY retention policy and also sets a &lt;code&gt;STACK_LIFE&lt;/code&gt; tag, indicating the lifetime of the stack in the absence of updates.&lt;/p&gt;

&lt;p&gt;To summarize the key enhancements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Destruction Service:&lt;/strong&gt; The centralization of destruction using the &lt;code&gt;DestroyerStack&lt;/code&gt; minimizes the risks of conflicts and complications, making it more efficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Service Events:&lt;/strong&gt; Utilizing AWS Service Events to detect stacks with the &lt;code&gt;STACK_LIFE&lt;/code&gt; tag enables automation and efficiency in monitoring and managing the lifetime of resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Cleanup:&lt;/strong&gt; The architecture now has an automated cleanup mechanism, which will be triggered based on the &lt;code&gt;STACK_LIFE&lt;/code&gt; tag, and if there's a failure in the cleanup process, you will be notified via SNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Resource Management:&lt;/strong&gt; With this setup, resources can be more efficiently managed, particularly during development stages where resource provisioning might be ephemeral.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This enhancement is particularly beneficial for DevOps environments, where teams frequently create and destroy resources for testing and development purposes. By automating the destruction of temporary resources, teams can ensure that only necessary resources are retained, leading to cost savings and more manageable infrastructure.&lt;/p&gt;

&lt;p&gt;However, do remember that the timing for deletion with DynamoDB's TTL is not precise. If you require more exact timing for resource cleanup, additional manual steps may be necessary.&lt;/p&gt;

&lt;p&gt;By integrating these enhancements into your ephemeral stack architecture, you'll enable more streamlined, automated, and efficient resource management within your AWS environment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating an Aurora MySQL Database and Setting Up a Kinesis CDC Stream with AWS CDK</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Mon, 26 Jun 2023 15:39:26 +0000</pubDate>
      <link>https://forem.com/aws-builders/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk-38fb</link>
      <guid>https://forem.com/aws-builders/creating-an-aurora-mysql-database-and-setting-up-a-kinesis-cdc-stream-with-aws-cdk-38fb</guid>
      <description>&lt;p&gt;Welcome to this comprehensive guide where we will be using the AWS Cloud Development Kit (CDK) to create an Aurora MySQL Database, initialize it using Custom Resources, and set up a Change Data Capture (CDC) Stream with Amazon Data Migration Service (DMS) and Kinesis.&lt;/p&gt;

&lt;p&gt;This post builds upon the concepts introduced in &lt;a href="https://matt.martz.codes/how-to-use-binlogs-to-make-an-aurora-mysql-event-stream" rel="noopener noreferrer"&gt;How to Use BinLogs to Make an Aurora MySQL Event Stream&lt;/a&gt;. Instead of relying on a lambda to parse the BinLog periodically, we'll be leveraging the capabilities of AWS DMS. The future integration of &lt;a href="https://aws.amazon.com/blogs/aws/new-aws-dms-serverless-automatically-provisions-and-scales-capacity-for-migration-and-data-replication/" rel="noopener noreferrer"&gt;Serverless DMS&lt;/a&gt; with CloudFormation promises to further enhance this system.&lt;/p&gt;

&lt;p&gt;You can find the code for this project on &lt;a href="https://github.com/martzcodes/blog-dms-stream" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To ensure clarity and organization, our project will be structured into two separate stacks: the DatabaseStack and the DMS Stack. The DatabaseStack includes the VPC, the Aurora MySQL database, and the lambdas responsible for initializing and seeding the database. The DMS Stack encompasses DMS, the CustomResources that manage the DMS Replication Task, and the target Kinesis Stream.&lt;/p&gt;

&lt;p&gt;This division allows us to accommodate those who already have a VPC and Aurora MySQL database in place. If you fall into this category, you can easily integrate your existing resources into the DMS stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Creating the VPC and Aurora MySQL Database
&lt;/h2&gt;

&lt;p&gt;Our first step is to create the DatabaseStack, which involves setting up the Aurora MySQL database and the VPC. You can find the code for this step &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/db-stack.ts" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When creating the VPC, it's crucial to ensure that it includes a NAT Gateway. This gateway allows instances in a private subnet to connect to the internet or other AWS Services while preventing inbound connections from the internet. This is essential because our resources within the VPC need to communicate with AWS Services. Fortunately, the CDK &lt;code&gt;Vpc&lt;/code&gt; construct provisions NAT Gateways by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const vpc = new Vpc(this, "vpc", {
  maxAzs: 2,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create the database cluster using the appropriate code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const db = new DatabaseCluster(this, "db", {
  clusterIdentifier: `db`,
  credentials: Credentials.fromGeneratedSecret("admin"),
  defaultDatabaseName: dbName,
  engine: DatabaseClusterEngine.auroraMysql({
    version: AuroraMysqlEngineVersion.VER_3_03_0,
  }),
  iamAuthentication: true,
  instanceProps: {
    instanceType: InstanceType.of(InstanceClass.T4G, InstanceSize.MEDIUM),
    vpc,
    vpcSubnets: {
      onePerAz: true,
    },
  },
  removalPolicy: RemovalPolicy.DESTROY,
  parameters: {
    binlog_format: "ROW",
    log_bin_trust_function_creators: "1",
    // https://aws.amazon.com/blogs/database/introducing-amazon-aurora-mysql-enhanced-binary-log-binlog/
    aurora_enhanced_binlog: "1",
    binlog_backup: "0",
    binlog_replication_globaldb: "0"
  },
});
db.connections.allowDefaultPortInternally();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the database is created, we need to initialize the schema. We achieve this by utilizing a &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html" rel="noopener noreferrer"&gt;CustomResource&lt;/a&gt;, which triggers actions during a CloudFormation stack deployment. In our case, we'll trigger a lambda function that connects to the database and creates a table. This CustomResource can also be used to create users or seed the database with data, but for now, we'll focus on creating an empty table.&lt;/p&gt;

&lt;p&gt;The first step in this process is to create the lambda. We ensure that the lambda has access to the database's secret (which is automatically created) with &lt;code&gt;db.secret?.grantRead(initFn)&lt;/code&gt;. This secret contains the credentials that the lambda needs to connect to the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const initFn = new NodejsFunction(this, `db-init`, {
  ...lambdaProps,
  entry: join(__dirname, "lambda/db-init.ts"),
  environment: {
    SECRET_ARN: secret.secretArn,
    DB_NAME: dbName,
    TABLE_NAME: tableName,
  },
  vpc,
  vpcSubnets: {
    onePerAz: true,
  },
  securityGroups: db.connections.securityGroups,
});
db.secret?.grantRead(initFn);
initFn.node.addDependency(db);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/db-init.ts" rel="noopener noreferrer"&gt;lambda handler code&lt;/a&gt; is responsible for creating the table, and we ensure that this action is carried out when the stack is created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import type {
  CloudFormationCustomResourceEvent,
  CloudFormationCustomResourceFailedResponse,
  CloudFormationCustomResourceSuccessResponse,
} from 'aws-lambda';

import { getConnectionPool } from './utils/connection';

export const handler = async (
  event: CloudFormationCustomResourceEvent,
): Promise&amp;lt;CloudFormationCustomResourceSuccessResponse | CloudFormationCustomResourceFailedResponse&amp;gt; =&amp;gt; {
  switch (event.RequestType) {
    case 'Create':
      try {
        const connection = await getConnectionPool();

        await connection.query(
          "CALL mysql.rds_set_configuration('binlog retention hours', 24);"
        );

        await connection.query(`DROP TABLE IF EXISTS ${process.env.DB_NAME}.${process.env.TABLE_NAME};`);
        await connection.query(`CREATE TABLE ${process.env.DB_NAME}.${process.env.TABLE_NAME} (id INT NOT NULL AUTO_INCREMENT, example VARCHAR(255) NOT NULL, PRIMARY KEY (id));`);

        return { ...event, PhysicalResourceId: `init-db`, Status: 'SUCCESS' };
      } catch (e) {
        console.error(`initialization failed!`, e);
        return { ...event, PhysicalResourceId: `init-db`, Reason: (e as Error).message, Status: 'FAILED' };
      }
    default:
      console.error('No op for', event.RequestType);
      return { ...event, PhysicalResourceId: 'init-db', Status: 'SUCCESS' };
  }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure that the lambda is invoked as part of the stack deployment, we &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.custom_resources-readme.html" rel="noopener noreferrer"&gt;create a provider for the lambda and a CustomResource&lt;/a&gt;. The provider specifies the lambda to be invoked, and the CustomResource triggers the invocation when the stack is deployed. This ensures that the database initialization is fully integrated into the stack deployment process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const initProvider = new Provider(this, `init-db-provider`, {
  onEventHandler: initFn,
});

new CustomResource(this, `init-db-resource`, {
  serviceToken: initProvider.serviceToken,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Understanding CloudFormation and DMS Streams
&lt;/h2&gt;

&lt;p&gt;DMS Change Data Capture replication relies on MySQL's Binlog. To enable DMS, binlog must be enabled in MySQL. When creating the database in the previous step, we included parameters that enable Aurora's enhanced binlog for improved performance. More information about this feature can be found &lt;a href="https://aws.amazon.com/blogs/database/introducing-amazon-aurora-mysql-enhanced-binary-log-binlog/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;binlog_format: "ROW",
log_bin_trust_function_creators: "1",
aurora_enhanced_binlog: "1",
binlog_backup: "0",
binlog_replication_globaldb: "0"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Moving on, we can now create the stack that contains the DMS Replication Task and Kinesis stream. You can access the relevant code &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/blog-dms-stream-stack.ts" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, we create the Kinesis stream that will serve as the target for the events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dbStream = new Stream(this, `db-stream`, {
  streamName: `db-stream`,
  streamMode: StreamMode.ON_DEMAND,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DMS requires a role named &lt;code&gt;dms-vpc-role&lt;/code&gt; to function correctly, but the role DMS would create automatically doesn't have the necessary permissions. Therefore, we need to create this role manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dmsRole = new Role(this, `dms-role`, {
  roleName: `dms-vpc-role`, // need the name for this one
  assumedBy: new ServicePrincipal("dms.amazonaws.com"),
});
dmsRole.addManagedPolicy(
  ManagedPolicy.fromManagedPolicyArn(this, `AmazonDMSVPCManagementRole`, `arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole`)
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create the Replication Subnet Group that DMS will use to connect to the database. Since it would attempt to create the &lt;code&gt;dms-vpc-role&lt;/code&gt; with the wrong permissions, we need to ensure that it uses the existing role we created. This requires adding a dependency between the two resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dmsSubnet = new CfnReplicationSubnetGroup(this, `dms-subnet`, {
  replicationSubnetGroupDescription: "DMS Subnet",
  subnetIds: vpc.selectSubnets({
    onePerAz: true,
  }).subnetIds,
});
dmsSubnet.node.addDependency(dmsRole);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can create the replication instance itself, utilizing the subnet we just created. For simplicity, we are using the smallest instance class, although ideally, we would support using &lt;a href="https://aws.amazon.com/blogs/aws/new-aws-dms-serverless-automatically-provisions-and-scales-capacity-for-migration-and-data-replication/" rel="noopener noreferrer"&gt;Serverless DMS&lt;/a&gt;. Unfortunately, CloudFormation does not yet provide support for this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dmsRep = new CfnReplicationInstance(this, `dms-replication`, {
  replicationInstanceClass: "dms.t2.micro",
  multiAz: false,
  publiclyAccessible: false,
  replicationSubnetGroupIdentifier: dmsSubnet.ref,
  vpcSecurityGroupIds: securityGroups.map(
    (sg) =&amp;gt; sg.securityGroupId
  ),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To enable DMS to connect to Aurora, we need to grant it access to the secret created by the database. We accomplish this by manually creating a Role, granting it permission to read the database's secret, and referencing it in the source endpoint for DMS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dmsSecretRole = new Role(this, `dms-secret-role`, {
  assumedBy: new ServicePrincipal(
    `dms.${Stack.of(this).region}.amazonaws.com`
  ),
});
secret.grantRead(dmsSecretRole);

const source = new CfnEndpoint(this, `dms-source-endpoint`, {
  endpointType: "source",
  engineName: "aurora",
  mySqlSettings: {
    secretsManagerAccessRoleArn: dmsSecretRole.roleArn,
    secretsManagerSecretId: secret.secretName,
  },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since our target is Kinesis, we also need to create a "target" endpoint and assign a role that has access to put records on the Kinesis stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const streamWriterRole = new Role(this, `dms-stream-role`, {
  assumedBy: new ServicePrincipal(
    `dms.${Stack.of(this).region}.amazonaws.com`
  ),
});

streamWriterRole.addToPolicy(
  new PolicyStatement({
    resources: [dbStream.streamArn],
    actions: [
      "kinesis:DescribeStream",
      "kinesis:PutRecord",
      "kinesis:PutRecords",
    ],
  })
);

const target = new CfnEndpoint(this, `dms-target-endpoint`, {
  endpointType: "target",
  engineName: "kinesis",
  kinesisSettings: {
    messageFormat: "JSON",
    streamArn: dbStream.streamArn,
    serviceAccessRoleArn: streamWriterRole.roleArn,
  },
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we create the replication task itself. We provide a generic table mapping that emits events for changes to any table. It's worth noting that wildcards can be used to restrict the mapping to specific tables if desired. For more information about table mappings, take a look at the &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dmsTableMappings = {
  rules: [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "1",
      "object-locator": {
        "schema-name": dbName,
        "table-name": "%",
        "table-type": "table",
      },
      "rule-action": "include",
      filters: [],
    },
  ],
};
const task = new CfnReplicationTask(this, `dms-stream-rep`, {
  replicationInstanceArn: dmsRep.ref,
  migrationType: "cdc",
  sourceEndpointArn: source.ref,
  targetEndpointArn: target.ref,
  tableMappings: JSON.stringify(dmsTableMappings),
  replicationTaskSettings: JSON.stringify({
    BeforeImageSettings: {
      EnableBeforeImage: true,
      FieldName: "before",
      ColumnFilter: "all",
    }
  }),
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, we provide &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.BeforeImage.html" rel="noopener noreferrer"&gt;&lt;code&gt;BeforeImageSettings&lt;/code&gt;&lt;/a&gt; in the &lt;code&gt;replicationTaskSettings&lt;/code&gt;, which enables us to include a before image for row updates, allowing us to infer deltas on table row updates. For Change Data Capture, we are using the migration type &lt;code&gt;cdc&lt;/code&gt; since we are not migrating existing data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important Considerations for CloudFormation and DMS
&lt;/h3&gt;

&lt;p&gt;CloudFormation can NOT update DMS tasks that are actively running, and it does not automatically start a DMS Replication Task as part of a deployment. To get around this, we'll set up two CustomResources and enforce their order so that one runs before any DMS changes and one runs after.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/dms-pre.ts" rel="noopener noreferrer"&gt;"pre" lambda&lt;/a&gt; checks whether the CloudFormation change set contains DMS changes. If it does and the replication task is running, it stops the task and waits for the task to finish stopping before responding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const StackName = `${process.env.STACK_NAME}`;
if (!ReplicationTaskArn) {
  ReplicationTaskArn = await getDmsTask({ cf, StackName });
}
const status = await getDmsStatus({ dms, ReplicationTaskArn });
if (status === 'running') {
  if (event.RequestType === 'Delete' || await hasDmsChanges({ cf, StackName })) {
    // pause task
    const stopCmd = new StopReplicationTaskCommand({
      ReplicationTaskArn,
    });
    await dms.send(stopCmd);
    // wait for task to be fully paused
    await waitForDmsStatus({ dms, ReplicationTaskArn, targetStatus: 'stopped' });
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
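&lt;p&gt;The &lt;code&gt;hasDmsChanges&lt;/code&gt; helper referenced above isn't shown here; one way to implement it is to scan the stack's change set for DMS resource types. The sketch below is illustrative only, with an assumed type shape rather than the linked repo's actual code; in the real lambda the changes array would come from the CloudFormation DescribeChangeSet API:&lt;/p&gt;

```typescript
// Minimal shape of a CloudFormation change-set entry (a subset of the real API response).
interface ResourceChange {
  ResourceType?: string; // e.g. "AWS::DMS::ReplicationTask"
  Action?: "Add" | "Modify" | "Remove";
}

// Returns true if any pending change touches a DMS resource.
const hasDmsChanges = (changes: ResourceChange[]): boolean =>
  changes.some((c) => (c.ResourceType ?? "").startsWith("AWS::DMS::"));

// Example: a change set that modifies a DMS endpoint and adds an S3 bucket.
const pending: ResourceChange[] = [
  { ResourceType: "AWS::DMS::Endpoint", Action: "Modify" },
  { ResourceType: "AWS::S3::Bucket", Action: "Add" },
];

console.log(hasDmsChanges(pending)); // true
console.log(hasDmsChanges([{ ResourceType: "AWS::S3::Bucket" }])); // false
```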



&lt;p&gt;On the other end, the &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/dms-post.ts" rel="noopener noreferrer"&gt;"post" lambda&lt;/a&gt; will do the opposite. It will start (or resume) the DMS replication and wait for it to finish spinning up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  const startCmd = new StartReplicationTaskCommand({
    ReplicationTaskArn,
    StartReplicationTaskType: "resume-processing",
  });
  await dms.send(startCmd);
  await waitForDmsStatus({
    dms,
    ReplicationTaskArn,
    targetStatus: "running",
  });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, we set up a lambda function that consumes the Kinesis stream by registering the stream as an event source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const kinesisFn = new NodejsFunction(this, `stream-kinesis`, {
  ...lambdaProps,
  entry: join(__dirname, "lambda/stream-subscriber.ts"),
  tracing: Tracing.ACTIVE,
});

kinesisFn.addEventSource(
  new KinesisEventSource(dbStream, {
    batchSize: 100, // default
    startingPosition: StartingPosition.LATEST,
    filters: [
      { pattern: JSON.stringify({ partitionKey: [`${dbName}.${tableName}`] }) },
    ],
  })
);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Testing the Event Stream
&lt;/h2&gt;

&lt;p&gt;With both stacks deployed, we can now test DMS. To facilitate this process, a lambda function has been created, and its code can be accessed &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/db-seed.ts" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You can invoke this function using test events via the AWS console.&lt;/p&gt;

&lt;p&gt;By logging in to the DMS console, we can observe that the replication task is already running, thanks to the CustomResources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkjudl33ylv89tp9inr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkjudl33ylv89tp9inr9.png" width="800" height="768"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Viewing the table statistics for the task, we can see that our schema has been identified. If there were additional tables, they would also be listed here, but in our case we only have the &lt;code&gt;examples&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld7lieq1rh3pgsf0gob9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld7lieq1rh3pgsf0gob9.png" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Invoking our seed lambda function will insert a row into the table. After a short time, the table statistics page will reflect the insert operation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqhqy0bj9ea01zohw5mp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqhqy0bj9ea01zohw5mp.png" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/martzcodes/blog-dms-stream/blob/main/lib/lambda/stream-subscriber.ts" rel="noopener noreferrer"&gt;lambda subscribed to the Kinesis stream&lt;/a&gt; receives the following event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "record": {
        "kinesis": {
            "kinesisSchemaVersion": "1.0",
            "partitionKey": "blog.examples",
            "sequenceNumber": "49642050638404656466522708801490648817992453925189451794",
            "data": "ewoJImRhdGEiOgl7CgkJImlkIjoJMSwKCQkiZXhhbXBsZSI6CSJoZWxsbyA2MzkiCgl9LAoJIm1ldGFkYXRhIjoJewoJCSJ0aW1lc3RhbXAiOgkiMjAyMy0wNi0yNVQxNToyMToxNC4wNTUxMzdaIiwKCQkicmVjb3JkLXR5cGUiOgkiZGF0YSIsCgkJIm9wZXJhdGlvbiI6CSJpbnNlcnQiLAoJCSJwYXJ0aXRpb24ta2V5LXR5cGUiOgkic2NoZW1hLXRhYmxlIiwKCQkic2NoZW1hLW5hbWUiOgkiYmxvZyIsCgkJInRhYmxlLW5hbWUiOgkiZXhhbXBsZXMiLAoJCSJ0cmFuc2FjdGlvbi1pZCI6CTEyODg0OTAyNjA5Cgl9Cn0=",
            "approximateArrivalTimestamp": 1687706474.102
        },
        "eventSource": "aws:kinesis",
        "eventVersion": "1.0",
        "eventID": "shardId-000000000001:49642050638404656466522708801490648817992453925189451794",
        "eventName": "aws:kinesis:record",
        "invokeIdentityArn": "arn:aws:iam::359317520455:role/BlogDmsStreamStack-streamkinesisServiceRole6A79529-7U8Q9JUVULLO",
        "awsRegion": "us-east-1",
        "eventSourceARN": "arn:aws:kinesis:us-east-1:359317520455:stream/db-stream"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's worth noting that Kinesis Base64-encodes the record data, so decoding is necessary to make it usable. Decoding the event's data yields the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "data": {
        "id": 1,
        "example": "hello 639"
    },
    "metadata": {
        "timestamp": "2023-06-25T15:21:14.055137Z",
        "record-type": "data",
        "operation": "insert",
        "partition-key-type": "schema-table",
        "schema-name": "blog",
        "table-name": "examples",
        "transaction-id": 12884902609
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By examining the decoded event, we can determine that it was an "insert" operation, and the "data" field contains the full row. In this case, since it was an insert, there is no "before" image.&lt;/p&gt;
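&lt;p&gt;The decode step itself is a one-liner with Node's &lt;code&gt;Buffer&lt;/code&gt; API. Here's a minimal sketch; the helper name and payload shape are illustrative, not the linked subscriber's exact code:&lt;/p&gt;

```typescript
// Decode a Kinesis record's Base64 payload into the DMS change event.
// `data` is the base64 string found at record.kinesis.data in the event above.
const decodeRecord = (data: string): { data: any; metadata: any } =>
  JSON.parse(Buffer.from(data, "base64").toString("utf-8"));

// Round-trip a small synthetic payload to demonstrate:
const payload = {
  data: { id: 1, example: "hello 639" },
  metadata: { operation: "insert", "table-name": "examples" },
};
const encoded = Buffer.from(JSON.stringify(payload)).toString("base64");
const decoded = decodeRecord(encoded);
console.log(decoded.metadata.operation); // "insert"
```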

&lt;p&gt;If a row is updated, the event will also contain a &lt;code&gt;before&lt;/code&gt; image representing the row's prior state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "parsed": {
        "data": {
            "id": 1,
            "example": "hello 297"
        },
        "before": {
            "id": 1,
            "example": "hello 639"
        },
        "metadata": {
            "timestamp": "2023-06-25T15:50:51.449661Z",
            "record-type": "data",
            "operation": "update",
            "partition-key-type": "schema-table",
            "schema-name": "blog",
            "table-name": "examples",
            "transaction-id": 12884903827
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there you could calculate a diff to see that the &lt;code&gt;example&lt;/code&gt; column went from &lt;code&gt;hello 639&lt;/code&gt; to &lt;code&gt;hello 297&lt;/code&gt;.&lt;/p&gt;
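&lt;p&gt;A minimal delta calculation over the &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;data&lt;/code&gt; images might look like the sketch below. This is a hypothetical helper, and it assumes scalar column values, since strict inequality won't compare nested objects:&lt;/p&gt;

```typescript
// Returns the columns whose values differ between the before image and the new row.
// Assumes scalar column values (strict inequality won't compare nested objects).
const rowDelta = (before: any, after: any) => {
  const delta: any = {};
  for (const key of Object.keys({ ...before, ...after })) {
    if (before[key] !== after[key]) {
      delta[key] = { from: before[key], to: after[key] };
    }
  }
  return delta;
};

// Using the update event above:
const delta = rowDelta(
  { id: 1, example: "hello 639" },
  { id: 1, example: "hello 297" }
);
console.log(delta); // logs that "example" went from "hello 639" to "hello 297"
```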

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, this comprehensive guide has provided you with the knowledge and steps necessary to create an Aurora MySQL Database, initialize it using Custom Resources, and set up a Change Data Capture (CDC) Stream with AWS CDK, AWS Data Migration Service (DMS), and Kinesis. By leveraging AWS DMS and event-driven architecture principles, you can unlock the full potential of real-time data replication and event streaming.&lt;/p&gt;

&lt;p&gt;As you move forward, there are several ways you can expand on the concepts and ideas covered in this guide. Here are a few suggestions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Explore Advanced CDC Stream Configurations&lt;/strong&gt;: Dive deeper into the configuration options available with DMS CDC streams. Experiment with table mappings, filtering options, and advanced settings to tailor the replication process to your specific use cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrate Additional AWS Services&lt;/strong&gt;: Consider integrating other AWS services into your event-driven architecture. For example, you could explore using AWS Lambda to process the replicated events, Amazon S3 for data storage, or AWS Glue for data cataloging and ETL operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implement Event-Driven Microservices&lt;/strong&gt;: Build event-driven microservices that consume the CDC stream events to trigger actions or updates across different systems. Explore how you can use services like AWS Step Functions or Amazon EventBridge to orchestrate complex workflows based on the captured events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scale and Optimize&lt;/strong&gt;: Experiment with scaling and optimizing your CDC stream setup. Explore strategies for handling high-velocity data streams, optimizing performance, and implementing fault-tolerant architectures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and Analyze&lt;/strong&gt;: Set up monitoring and analytics solutions to gain insights into your event-driven system. Utilize services like Amazon CloudWatch and AWS X-Ray to track and analyze the performance, reliability, and usage patterns of your CDC stream and associated components.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By expanding on the ideas presented in this guide, you can harness the full potential of a Change Data Capture stream in an event-driven architecture. This approach allows you to build scalable, real-time systems that react to changes in your data and drive intelligent decision-making in your applications. The possibilities for innovation and optimization are vast, so take this foundation and continue exploring the exciting world of event-driven architectures.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Blink and It's Gone: Embracing Ephemeral CDK Stacks for Efficient DevOps</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Tue, 30 May 2023 18:27:45 +0000</pubDate>
      <link>https://forem.com/aws-builders/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops-534m</link>
      <guid>https://forem.com/aws-builders/blink-and-its-gone-embracing-ephemeral-cdk-stacks-for-efficient-devops-534m</guid>
      <description>&lt;p&gt;&lt;em&gt;I'm excited to announce that I'll be speaking at AWS Summit Washington, DC on June 8th, 2023, at 2:15PM (DEV206). My DevChat will discuss the benefits of ephemeral CDK Stacks for development workflows and CI/CD pipelines. If you're attending, I'd love to see you there and answer any questions you might have.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This post serves as a supplement to my talk (spoilers), providing more insights into ephemeral CDK Stacks, their implementation, and best practices. If you're exploring the topic or haven't yet decided on attending the AWS Summit, I hope this post sparks your interest and encourages you to join me for an engaging discussion. See you there! 🎉&lt;/p&gt;

&lt;p&gt;In the fast-paced DevOps world, managing and cleaning up temporary cloud resources can be challenging. Forgotten stacks from testing or PoC stages lead to resource wastage and inflated cloud bills. To address this, ephemeral AWS CDK Stacks are a game changer.&lt;/p&gt;

&lt;p&gt;This blog post explores integrating ephemeral CDK Stacks into CI/CD pipelines and development workflows. Powered by Self-Destructing Constructs and CDK Aspects, they automate the cleanup process, ensuring a tidy AWS environment and resource savings.&lt;/p&gt;

&lt;p&gt;Ephemeral stacks are a simple yet powerful addition to your DevOps toolkit, streamlining resource management and reducing costs. Let's dive into leveraging ephemeral stacks, saving you from manual cleanup and the perils of forgotten stacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-Destructing Construct and CDK Aspects
&lt;/h2&gt;

&lt;p&gt;Ephemeral CDK Stacks are made possible by combining the concepts from two of my previous posts on Self-Destructing Constructs and CDK Aspects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-Destructing Constructs&lt;/strong&gt;: As discussed in my previous post, &lt;a href="https://matt.martz.codes/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction" rel="noopener noreferrer"&gt;Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction&lt;/a&gt;, these are unique AWS CDK constructs that employ Step Functions to automatically delete the stack after a defined duration. This construct helps ensure that unnecessary resources aren't lingering around, reducing costs and freeing up space in your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CDK Aspects&lt;/strong&gt;: My last post, &lt;a href="https://dev.to/martzcodes/breaking-bad-practices-with-cdk-aspects-2776-temp-slug-5136472"&gt;Breaking Bad Practices with CDK Aspects&lt;/a&gt;, explained how CDK Aspects allow for implementing cross-cutting concerns across your stack, such as applying an aggressive removal policy to all resources. When paired with self-destructing stacks, this ensures a clean slate post-destruction, leaving no stray resources behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating Ephemeral Stacks with CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;CI/CD pipelines, a cornerstone of modern DevOps practices, can greatly benefit from ephemeral CDK Stacks. In a typical CI/CD pipeline, each code commit triggers a process that includes building, testing, and deploying the application. This often involves deploying a stack and, once the tests are executed, cleaning up the resources.&lt;/p&gt;

&lt;p&gt;However, sometimes stacks are left behind due to failures in the pipeline or prematurely terminated tests. These forgotten stacks lead to unnecessary costs and clutter. Ephemeral stacks resolve this by auto-deleting after a specific duration, ensuring a clean AWS environment and reducing the wastage of resources.&lt;/p&gt;

&lt;p&gt;To illustrate this, let's compare the sequence of events in a typical CI/CD pipeline and one enhanced with ephemeral stacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Typical CI/CD pipeline:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffojsxph2xi4y258sckkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffojsxph2xi4y258sckkw.png" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the traditional CI/CD pipeline, the stack is explicitly deleted as part of the pipeline, which could fail or be skipped, resulting in lingering stacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD pipeline with Ephemeral Stacks:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqle1gdg2oo981q6niai9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqle1gdg2oo981q6niai9.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With ephemeral stacks, the stack deletion is automated and independent of the CI/CD pipeline, ensuring that stacks are always cleaned up.&lt;/p&gt;

&lt;p&gt;This approach reduces the complexity of your CI/CD pipeline and ensures cleaner resource management in your AWS environment. Additionally, you can set up an aggressive removal policy using CDK Aspects for a more comprehensive cleanup of all resources associated with the stack.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It doesn't sound like a lot, but in practice, it can save hours (over the course of weeks) while ensuring AWS resources (and money) aren't being wasted. 🚀&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Ephemeral Stacks in Development Workflows
&lt;/h2&gt;

&lt;p&gt;Developers often deploy proof of concept (PoC) stacks to test new features, validate concepts, or debug issues. These stacks provide a sandboxed environment to experiment without affecting the production infrastructure. However, once the purpose is served, these stacks often become "forgotten" entities in the cloud environment. They no longer serve a meaningful purpose and, over time, contribute to clutter and unnecessary costs. Implementing ephemeral stacks in development workflows can be an excellent solution to this common oversight.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem of Forgotten Stacks
&lt;/h3&gt;

&lt;p&gt;In the energetic world of development work, the adrenaline rush of solving complex problems or moving onto the next exhilarating task can often eclipse the essential, albeit less exciting, cleanup step. This oversight, often spurred by the thrill of innovation 💡 or the satisfaction of squashing a bug 🐛, can lead to the pile-up of forgotten temporary stacks. Over time, these lingering stacks clutter your environment and inflate your cloud bill 💸.&lt;/p&gt;

&lt;p&gt;Consider some common scenarios where these temporary stacks crop up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof of Concept Testing:&lt;/strong&gt; During the innovation process, PoC stacks are often crafted to assess feasibility or demonstrate the practicality of a concept. Upon approval or rejection of the concept, these stacks fulfill their purpose and, ideally, should vanish 🗑.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Testing:&lt;/strong&gt; To ensure isolated and accurate testing, developers frequently deploy separate stacks for new features. Once validated and merged into the primary codebase, these stacks have served their purpose and become redundant 🔄.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging:&lt;/strong&gt; In the process of troubleshooting intricate issues, developers may create replica stacks to isolate and understand the problem. After resolving the issue, these stacks lose their relevance and ought to be removed 🔎.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let me illustrate this with a personal anecdote to drive the point home:&lt;/p&gt;

&lt;p&gt;In my role, I once assumed the responsibility of cleaning up unused stacks in our development accounts. On this particular day, I found myself navigating through and deleting over 200 of these forgotten stacks. As a serverless-first shop, this exercise didn't incur exorbitant costs, but it did consume resources and impact our resource quotas. This served as a stark reminder of the importance of efficient stack management, and how easily these forgotten stacks can clutter our environment 🧹.&lt;/p&gt;

&lt;p&gt;Ephemeral stacks can be the silver bullet to these common oversights, introducing an automated cleanup process to ensure your development environment stays neat, tidy and cost-effective 💰.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Ephemeral Stack Solution
&lt;/h3&gt;

&lt;p&gt;Ephemeral stacks can serve as a fail-safe, ensuring that even if a developer forgets to delete a stack, it won't linger indefinitely. By setting a self-destruct timer at the time of stack creation, developers can rest easy knowing the stack will automatically clean itself up after a specific period.&lt;/p&gt;

&lt;p&gt;Here's how this can be integrated into the common scenarios mentioned above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proof of Concept Testing&lt;/strong&gt;: Set the stack to auto-delete after the meeting where the PoC will be demonstrated. This way, if the concept is rejected, the stack is automatically cleaned up. If the concept is approved, the stack can be manually preserved or redeployed as a more permanent resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Testing&lt;/strong&gt;: Set a short lifespan for the stack, perhaps a few hours or a day, to allow for feature validation. After that period, the stack, if not needed, will self-destruct.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging&lt;/strong&gt;: Given the unpredictable nature of debugging, a slightly longer lifespan could be set. Once the issue is resolved, if the developer forgets to clean up, the stack will still auto-delete after the set duration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Integrating ephemeral stacks into your development workflow can lead to more efficient resource utilization, a cleaner cloud environment, and cost savings. It's a practical way to safeguard against human error without adding an extra burden on the developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices and Practical Insights
&lt;/h2&gt;

&lt;p&gt;Incorporating ephemeral stacks into your development workflow requires careful planning and adherence to best practices. Here are some insights to help you avoid common pitfalls and get the most out of self-destructing stacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set a Reasonable Lifespan
&lt;/h3&gt;

&lt;p&gt;While the primary goal is to prevent forgotten stacks from lingering, setting the self-destruction timer too short might interrupt development or testing work. Consider the typical time required for the task at hand and add some buffer to determine the optimal lifespan.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrate with CI/CD Pipeline
&lt;/h3&gt;

&lt;p&gt;To get the most out of ephemeral stacks, they should be integrated with your CI/CD pipeline. This ensures that the stacks are created as part of your automated testing and deployment process and that they clean up after themselves when no longer needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manage Permissions Carefully
&lt;/h3&gt;

&lt;p&gt;Remember, self-destructing stacks need permission to delete resources. Ensure that these permissions are granted judiciously, keeping in line with the principle of least privilege. Be especially cautious when dealing with production environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clearly Label Ephemeral Stacks
&lt;/h3&gt;

&lt;p&gt;To avoid confusion and potential mishaps, it's important to clearly label ephemeral stacks. This could be through a naming convention or tagging. Clear labels also make it easier to locate and manage these stacks in the AWS Management Console.&lt;/p&gt;
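&lt;p&gt;One lightweight convention, purely illustrative and not prescribed by CDK, is to bake the expiry date into the stack name so it's visible at a glance in the console:&lt;/p&gt;

```typescript
// Build an ephemeral stack name that encodes its expiry date, e.g.
// "feature-x-ephemeral-20230615". Names stay CloudFormation-safe
// (alphanumerics and hyphens only). This naming scheme is a hypothetical
// example, not a CDK or AWS standard.
const ephemeralStackName = (base: string, ttlDays: number, now: Date = new Date()): string => {
  const expiry = new Date(now.getTime() + ttlDays * 24 * 60 * 60 * 1000);
  const stamp = expiry.toISOString().slice(0, 10).replace(/-/g, "");
  return `${base}-ephemeral-${stamp}`;
};

console.log(ephemeralStackName("feature-x", 3, new Date("2023-06-12T00:00:00Z")));
// "feature-x-ephemeral-20230615"
```

A tag (for example `ephemeral=true`) applied alongside the name makes these stacks easy to find with resource-group queries as well.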

&lt;h3&gt;
  
  
  Employ a Monitoring System
&lt;/h3&gt;

&lt;p&gt;While the self-destruction mechanism should work reliably, it's a good idea to have a monitoring system in place. This will alert you to any stacks that didn't delete as expected or if there are any issues with the self-destruction process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consider Exceptions
&lt;/h3&gt;

&lt;p&gt;Not every stack should be ephemeral. Some stacks may need to persist for longer periods for certain tasks, like long-running data processing or scenarios where manual intervention is required. Ensure that your system allows for such exceptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Educate Your Team
&lt;/h3&gt;

&lt;p&gt;Finally, make sure that your development team is fully aware of the ephemeral stack concept and its implications. They should understand the purpose, the lifespan of these stacks, and what they can expect during the self-destruction process.&lt;/p&gt;

&lt;p&gt;Implementing ephemeral stacks requires more than just technical setup; it's a shift in the development mindset. With proper planning and adherence to these best practices, you can seamlessly integrate ephemeral stacks into your development workflow, leading to more efficient resource utilization and cost savings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the realm of DevOps, resource management efficiency is critical, and ephemeral CDK Stacks provide a practical, automated solution to a common problem. By incorporating these self-destructing constructs into your CI/CD pipelines and development workflows, you can ensure the timely cleanup of temporary stacks, reducing resource wastage and maintaining a cleaner AWS environment.&lt;/p&gt;

&lt;p&gt;Ephemeral stacks offer a powerful combination of convenience and economy, alleviating developers from the burden of manual cleanup and saving valuable time and costs. They also guard against human error and oversight, a common cause of lingering, unnecessary cloud resources.&lt;/p&gt;

&lt;p&gt;Remember, the effective implementation of ephemeral stacks involves more than just technical setup. It also requires a shift in mindset, careful planning, and adherence to best practices. But with these in place, ephemeral stacks can become a vital part of your DevOps toolkit, promoting efficiency, tidiness, and cost-effectiveness in your cloud journey.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Breaking Bad Practices with CDK Aspects</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Wed, 17 May 2023 13:58:16 +0000</pubDate>
      <link>https://forem.com/aws-builders/breaking-bad-practices-with-cdk-aspects-4nm0</link>
      <guid>https://forem.com/aws-builders/breaking-bad-practices-with-cdk-aspects-4nm0</guid>
      <description>&lt;p&gt;In the ever-evolving landscape of cloud infrastructure, AWS Cloud Development Kit (CDK) continues to stand as a groundbreaking tool, simplifying the process of defining cloud resources. Within this universe, Aspectsa feature within CDKhold a distinctive position. &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/aspects.html" rel="noopener noreferrer"&gt;Aspects&lt;/a&gt; act as autonomous agents within your CDK constructs, systematically traversing and applying consistent modifications.&lt;/p&gt;

&lt;p&gt;This article aims to dissect the intricacies of CDK Aspects, shedding light on their fundamental purpose, operation, and advanced use cases. Aspects, in many ways, embody the balance between consistency and flexibility in cloud infrastructure development, a concept that's growing increasingly important in this era of complex, scalable applications.&lt;/p&gt;

&lt;p&gt;Whether you are a seasoned AWS CDK user or a newcomer looking to expand your cloud development toolkit, this deep dive into CDK Aspects will provide valuable insights into this powerful feature. As we peel back the layers, you'll discover how Aspects can enhance resource management, improve security protocols, and promote code efficiency. Let's 'cook' up some knowledge on CDK Aspects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eyvfbu3i1jrgohz0q73.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2eyvfbu3i1jrgohz0q73.gif" width="245" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example code for this repository is located here: &lt;a href="https://github.com/martzcodes/blog-aspects" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-aspects&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article by &lt;a href="https://hashnode.com/@JannikWempe" rel="noopener noreferrer"&gt;@JannikWempe&lt;/a&gt; is another great resource: &lt;a href="https://aws.hashnode.com/the-power-of-aws-cdk-aspects" rel="noopener noreferrer"&gt;https://aws.hashnode.com/the-power-of-aws-cdk-aspects&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding CDK Aspects - The Basics
&lt;/h2&gt;

&lt;p&gt;At its core, the AWS Cloud Development Kit (CDK) is a software development framework that allows developers to define cloud infrastructure in code. This is where CDK Aspects come into play. Aspects are a feature within CDK that act like intelligent filters, traversing your code, identifying specific constructs, and applying modifications to them.&lt;/p&gt;

&lt;p&gt;Consider this simple TypeScript code snippet of an AWS S3 bucket defined using CDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Stack, StackProps } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class MyCDKStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new Bucket(this, 'MyBucket', {
      versioned: false,
    });
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's say you want to ensure that every S3 bucket in your CDK application has versioning enabled. With CDK Aspects you can do this in two ways.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Validation - Create a CDK Aspect to validate all Buckets in the Stack and throw an Error if it is not set&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modification - Create a CDK Aspect to automatically set the versioned property on all buckets&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this context, your CDK code is like a complex chemistry experiment that needs careful management to prevent unwanted reactions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validation (the Chemistry Professor)&lt;/strong&gt;: In the validation role, the Chemistry Professor is like a vigilant observer, watching you perform your experiment. If they notice you're about to mix incompatible substances or your measurements are incorrect, they intervene immediately (throw an error) to prevent a potential disaster and ensure the effectiveness of your experiment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Modification (the Chemistry Assistant)&lt;/strong&gt;: In the lab, an Assistant stands by to help with the experiment, offering a slight adjustment to ensure the reactions go as planned. They don't conduct the experiment for you but provide just enough assistance to keep you on track. This is akin to the modification role of Aspects, which scan your constructs and make slight but important tweaks to ensure consistency and conformity to standards.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To draw the parallel back to our CDK Aspects, the Aspect could, like a Chemistry Assistant, automatically adjust certain aspects of your resources (like enabling versioning for all S3 buckets), ensuring your infrastructure maintains the proper 'formula' throughout its configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrjoinlybmsyksksj8cg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrjoinlybmsyksksj8cg.gif" width="480" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's how you might define that Aspect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Annotations, App, Aspects, IAspect, Tokenization } from 'aws-cdk-lib';
import { CfnBucket } from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// For validation
export class ValidateVersioningAspect implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof CfnBucket) {
      if (!node.versioningConfiguration
        || (!Tokenization.isResolvable(node.versioningConfiguration)
            &amp;amp;&amp;amp; node.versioningConfiguration.status !== 'Enabled')) {
              Annotations.of(node).addError('Bucket versioning is not enabled');
      }
    }
  }
}

const app = new App();
const stack = new MyCDKStack(app, 'MyStack');
Aspects.of(stack).add(new ValidateVersioningAspect());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-aspects/blob/main/lib/ValidateVersioningAspect.ts" rel="noopener noreferrer"&gt;In this code&lt;/a&gt;, the &lt;code&gt;ValidateVersioningAspect&lt;/code&gt; Aspect will add an error &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Annotations.html" rel="noopener noreferrer"&gt;Annotation&lt;/a&gt; if it finds an S3 bucket with versioning disabled, ensuring that all S3 buckets comply with the requirement for versioning to be enabled. On synth, the error would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Error at /MyTestStack/MyBucket/Resource] Bucket versioning is not enabled

Found errors

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-aspects/blob/main/test/validate.test.ts" rel="noopener noreferrer"&gt;Annotations can also be tested&lt;/a&gt; and have support from CDK's built-in assertions: &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Annotations.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.assertions.Annotations.html&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { App, Aspects, IAspect, Tokenization } from 'aws-cdk-lib';
import { CfnBucket } from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// For modification
export class EnableVersioningAspect implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof CfnBucket) {
      if (!node.versioningConfiguration
        || (!Tokenization.isResolvable(node.versioningConfiguration)
            &amp;amp;&amp;amp; node.versioningConfiguration.status !== 'Enabled')) {
              node.versioningConfiguration = {
                status: 'Enabled'
              };
      }
    }
  }
}

const app = new App();
const stack = new MyCDKStack(app, 'MyStack');
Aspects.of(stack).add(new EnableVersioningAspect());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-aspects/blob/main/lib/EnableVersioningAspect.ts" rel="noopener noreferrer"&gt;In this code&lt;/a&gt;, the &lt;code&gt;EnableVersioningAspect&lt;/code&gt; class defines an Aspect that will "visit" every construct in the stack. If the construct is an instance of the &lt;code&gt;CfnBucket&lt;/code&gt; class and versioning isn't already enabled, the Aspect sets its &lt;code&gt;versioningConfiguration&lt;/code&gt; status to &lt;code&gt;Enabled&lt;/code&gt;, effectively enabling versioning for every bucket in the stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Dive into CDK Aspects
&lt;/h2&gt;

&lt;p&gt;In this section, we'll delve deeper into the inner workings of CDK Aspects and uncover the magic behind the scenes. We'll explain the crucial role of the &lt;code&gt;visit&lt;/code&gt; method, discuss the &lt;code&gt;Aspects.of(scope).add(aspect)&lt;/code&gt; pattern, and illustrate how Aspects interact with the CDK's synthesis process. When using Aspects, it's crucial to be aware of how the AWS CDK uses &lt;em&gt;tokenization&lt;/em&gt; to manage resources' properties.&lt;/p&gt;

&lt;h3&gt;
  
  
  The &lt;code&gt;visit&lt;/code&gt; Method
&lt;/h3&gt;

&lt;p&gt;At the heart of any CDK Aspect is the &lt;code&gt;visit&lt;/code&gt; method. Any class implementing the &lt;code&gt;IAspect&lt;/code&gt; interface must define this method, and it is called for each construct the Aspect is applied to. The &lt;code&gt;visit&lt;/code&gt; method takes a single argument, an &lt;code&gt;IConstruct&lt;/code&gt;, which is the construct the Aspect is visiting. What you do inside this method is the meat of your Aspect: whether you choose to flag an error for validation or modify a property of the construct.&lt;/p&gt;
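&lt;p&gt;The mechanics are small enough to sketch without &lt;code&gt;aws-cdk-lib&lt;/code&gt;. Below is a dependency-free sketch - the &lt;code&gt;IConstruct&lt;/code&gt; and &lt;code&gt;IAspect&lt;/code&gt; shapes are simplified stand-ins for illustration, not the real CDK interfaces - showing that an Aspect is just an object with a &lt;code&gt;visit&lt;/code&gt; method that receives each construct in turn:&lt;br&gt;
&lt;/p&gt;

```typescript
// Simplified stand-ins for the real CDK interfaces (illustrative assumption:
// the real IConstruct carries far more than a path and children).
interface IConstruct {
  path: string;
  children: IConstruct[];
}

interface IAspect {
  visit(node: IConstruct): void;
}

// An Aspect that records every construct path it visits.
class PathLoggerAspect implements IAspect {
  readonly visited: string[] = [];
  visit(node: IConstruct): void {
    this.visited.push(node.path);
  }
}

// Apply an Aspect the way the CDK does: visit the node, then recurse into children.
function applyAspect(root: IConstruct, aspect: IAspect): void {
  aspect.visit(root);
  for (const child of root.children) {
    applyAspect(child, aspect);
  }
}

const tree: IConstruct = {
  path: "MyStack",
  children: [{ path: "MyStack/MyBucket", children: [] }],
};
const logger = new PathLoggerAspect();
applyAspect(tree, logger);
console.log(logger.visited); // ["MyStack", "MyStack/MyBucket"]
```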

&lt;h3&gt;
  
  
  The Intricacies of Tokenization in AWS CDK
&lt;/h3&gt;

&lt;p&gt;Tokens are placeholders used by the CDK to represent values that are not known until deployment time. For example, if you create an S3 bucket without specifying a bucket name, CDK generates a unique name and represents it with a token in your code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwr6c04tu00qvry744dsj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwr6c04tu00qvry744dsj.gif" width="500" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you inspect the &lt;code&gt;bucketName&lt;/code&gt; property during the &lt;code&gt;visit&lt;/code&gt; method, you might expect to see an actual bucket name. However, you'll instead see a token, something like &lt;code&gt;${Token[TOKEN.12]}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The tokenization system can lead to unexpected results when using Aspects. For instance, if you attempt to modify a property that uses a token, your Aspect might not behave as expected. This is because tokens aren't resolved until the CDK synthesizes your app into a CloudFormation template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-aspects/blob/main/lib/TokenAwareAspect.ts" rel="noopener noreferrer"&gt;Here's an example&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { App, Aspects, IAspect, Stack } from 'aws-cdk-lib';
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

export class TokenAwareAspect implements IAspect {
  visit(node: IConstruct): void {
    if (node instanceof Bucket) {
      console.log(`Bucket name is ${node.bucketName}`);
    }
  }
}

const app = new App();
const stack = new Stack(app, 'MyStack');
new Bucket(stack, 'MyBucket');
Aspects.of(stack).add(new TokenAwareAspect());

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the console output, you'll see a token as the bucket name, not a real bucket name. Keep this in mind when designing your Aspects!&lt;/p&gt;

&lt;h3&gt;
  
  
  Applying Aspects
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;Aspects.of(scope).add(aspect)&lt;/code&gt; pattern is the standard way to apply an Aspect to a construct. In this pattern, &lt;code&gt;Aspects.of(scope)&lt;/code&gt; returns an &lt;code&gt;Aspects&lt;/code&gt; object associated with a construct, and &lt;code&gt;add(aspect)&lt;/code&gt; adds an Aspect to this object. The &lt;code&gt;scope&lt;/code&gt; here could be any construct to which you want to apply the Aspect, typically an instance of a Stack or an App.&lt;/p&gt;

&lt;h3&gt;
  
  
  Aspects and the Synthesis Process
&lt;/h3&gt;

&lt;p&gt;CDK Aspects play a crucial role during the CDK's synthesis process. The synthesis process is a multi-stage operation where CDK translates your code into a CloudFormation template, which AWS can understand. During this process, Aspects are invoked after the construct tree has been fully initialized, but before synthesis. This allows Aspects to validate or modify constructs right before the CloudFormation templates are generated.&lt;/p&gt;

&lt;p&gt;Just as Skyler White had to understand the sequence of money laundering, let's delve deeper into the sequence of CDK Aspects with a diagram.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sequenceDiagram
    participant User
    participant CDK App
    participant Aspect
    participant CloudFormation
    User-&amp;gt;&amp;gt;CDK App: Runs CDK Synth
    CDK App-&amp;gt;&amp;gt;CDK App: Initializes Construct Tree
    CDK App-&amp;gt;&amp;gt;Aspect: Invokes Aspects
    Aspect--&amp;gt;&amp;gt;CDK App: Validates/Modifies Constructs
    CDK App-&amp;gt;&amp;gt;CloudFormation: Generates CloudFormation Template

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbf2tf7i0m19wbpparb9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbf2tf7i0m19wbpparb9.png" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CDK Aspects in Action: An Architecture Diagram Generator
&lt;/h2&gt;

&lt;p&gt;In this section, we'll explore a concrete example of using CDK Aspects in a real-world scenario. We'll delve into the internals of a recently published npm library, &lt;a href="https://github.com/aws-community-projects/arch-dia" rel="noopener noreferrer"&gt;&lt;code&gt;@aws-community/arch-dia&lt;/code&gt;&lt;/a&gt;, which uses a CDK Aspect to generate a pseudo-architecture diagram of a project. Not only does it visualize your AWS infrastructure, but it also tracks changes between synthesis stages, providing a visual diff.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture Diagram Aspect
&lt;/h3&gt;

&lt;p&gt;The key component in &lt;a href="https://github.com/aws-community-projects/arch-dia" rel="noopener noreferrer"&gt;&lt;code&gt;@aws-community/arch-dia&lt;/code&gt;&lt;/a&gt; is the &lt;a href="https://github.com/aws-community-projects/arch-dia/blob/main/src/architecture-diagram.ts" rel="noopener noreferrer"&gt;&lt;code&gt;ArchitectureDiagramAspect&lt;/code&gt;&lt;/a&gt;, an implementation of the &lt;code&gt;IAspect&lt;/code&gt; interface. This Aspect traverses the constructs in a given Stack to generate a Mermaid diagram representing the architecture of your AWS resources. Here's an overview of the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class ArchitectureDiagramAspect implements IAspect {
  private readonly mermaidDiagram: string[];
  private stackName = '';

  constructor () {
    this.mermaidDiagram = [];
  }

  visit (node: IConstruct): void {
    if (node instanceof Stack) {
      this.stackName = node.stackName;
      this.traverseConstruct(node, '');
    }
  }
  ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Aspect, like all Aspects, has a &lt;code&gt;visit&lt;/code&gt; method. It checks if the visited construct is an instance of the Stack class. If it is, it initiates a traversal of the constructs in that stack.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;traverseConstruct&lt;/code&gt; method iteratively visits all children of a given construct, building up a Mermaid diagram string in the process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private traverseConstruct (construct: IConstruct, parentPath: string): void {
  ...
  construct.node.children.forEach((child) =&amp;gt; {
    this.traverseConstruct(child, currentPath);
  });
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generating and Comparing Diagrams
&lt;/h3&gt;

&lt;p&gt;Once all constructs have been visited, the Aspect can generate a Mermaid diagram of the entire Stack using the &lt;code&gt;generateDiagram&lt;/code&gt; method. This method also handles comparing the newly generated diagram with the previous one, if it exists, to create a visual diff:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generateDiagram (): string {
  ...
  const addedElements = [...newElements].filter((e) =&amp;gt; !oldElements.has(e));
  const removedElements = [...oldElements].filter((e) =&amp;gt; !newElements.has(e));
  const added = this.mermaidDiagram.filter((line) =&amp;gt; !old.includes(line));
  const removed = old.filter((line) =&amp;gt; !this.mermaidDiagram.includes(line));
  ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This visual diff highlights the changes between the old and new architectures, providing a clear visualization of how your resources have evolved.&lt;/p&gt;
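&lt;p&gt;Under the hood, that comparison is just set arithmetic over diagram lines. Here's a dependency-free sketch of the idea - &lt;code&gt;diffDiagrams&lt;/code&gt; is an illustrative name, not the library's actual API:&lt;br&gt;
&lt;/p&gt;

```typescript
// Compare two Mermaid diagrams, line by line. Lines present only in the new
// diagram are "added"; lines present only in the old one are "removed".
// (Illustrative sketch -- not the actual @aws-community/arch-dia internals.)
function diffDiagrams(oldLines: string[], newLines: string[]) {
  const oldSet = new Set(oldLines);
  const newSet = new Set(newLines);
  return {
    added: newLines.filter((line) => !oldSet.has(line)),
    removed: oldLines.filter((line) => !newSet.has(line)),
  };
}

const previous = ["MyStack --> MyBucket"];
const current = ["MyStack --> MyBucket", "MyStack --> MyTable"];
const diff = diffDiagrams(previous, current);
console.log(diff.added);   // ["MyStack --> MyTable"]
console.log(diff.removed); // []
```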

&lt;p&gt;By traversing the constructs in a Stack, &lt;code&gt;@aws-community/arch-dia&lt;/code&gt; can generate a visual representation of your AWS resources and track changes over time. This not only aids in understanding and documenting your infrastructure but can also serve as a powerful tool for communicating changes to stakeholders.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices and Tips: Using CDK Aspects Effectively
&lt;/h2&gt;

&lt;p&gt;Once you've got a handle on the basics of CDK Aspects, here are a few additional best practices and tips to help you use them more effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Use Aspects for Cross-Cutting Concerns
&lt;/h3&gt;

&lt;p&gt;CDK Aspects are ideal for applying changes or enforcing rules that cut across different layers or types of resources in your infrastructure. Consider using Aspects when you want to apply a consistent policy or setting across multiple resources, especially when they are of different types.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Be Cognizant of the Construct Tree Traversal
&lt;/h3&gt;

&lt;p&gt;CDK Aspects traverse the construct tree using a depth-first, top-down approach: the &lt;code&gt;visit&lt;/code&gt; method is invoked on a construct before it is invoked on its children. In certain scenarios, you may need to be aware of this order of traversal to achieve the desired results.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Account for CDK Tokenization
&lt;/h3&gt;

&lt;p&gt;As we discussed earlier, the CDK uses tokenization to handle values that aren't known until deployment time. Be aware of this while designing your Aspects, especially when inspecting or modifying properties that might be tokenized. If necessary, consider using the &lt;code&gt;Token.isUnresolved&lt;/code&gt; method to check if a value is a token.&lt;/p&gt;
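&lt;p&gt;The guard boils down to "check before you compare." In real CDK code you would call &lt;code&gt;Token.isUnresolved(value)&lt;/code&gt; from &lt;code&gt;aws-cdk-lib&lt;/code&gt;; the sketch below simulates that check against the placeholder format tokens render as, purely for illustration:&lt;br&gt;
&lt;/p&gt;

```typescript
// In the CDK, unresolved tokens render as placeholders like "${Token[TOKEN.12]}".
// This simulated check stands in for Token.isUnresolved (illustration only).
function isUnresolvedToken(value: string): boolean {
  return value.startsWith("${Token[");
}

// Only validate a bucket name when it is a concrete string; tokens are
// resolved at synth time, so comparing them directly would be meaningless.
function validateBucketName(name: string): string {
  if (isUnresolvedToken(name)) {
    return "skipped: value is a token";
  }
  return name.length > 0 ? "ok" : "error: empty name";
}

console.log(validateBucketName("${Token[TOKEN.12]}")); // "skipped: value is a token"
console.log(validateBucketName("my-logs-bucket"));     // "ok"
```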

&lt;h3&gt;
  
  
  4. Avoid Making Changes Outside the Visit Method
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;visit&lt;/code&gt; method is the only place where you should make changes to constructs when using Aspects. While it might be technically possible to modify constructs outside this method, doing so can lead to unexpected behavior and hard-to-debug issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Test Your Aspects
&lt;/h3&gt;

&lt;p&gt;As with any code, you should thoroughly test your Aspects. Given that Aspects can modify constructs across your app, a small error in an Aspect can have a broad impact. Consider using the AWS CDK's built-in testing tools, like the &lt;code&gt;aws-cdk-lib/assertions&lt;/code&gt; library, to write unit tests for your Aspects.&lt;/p&gt;
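&lt;p&gt;Because &lt;code&gt;visit&lt;/code&gt; is just a method, the logic inside an Aspect is often easy to unit-test in isolation with hand-built nodes. Here's a dependency-free sketch - the &lt;code&gt;FakeCfnBucket&lt;/code&gt; type is a stand-in for this example; with the real library you would synthesize the stack and assert against the template using &lt;code&gt;aws-cdk-lib/assertions&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

```typescript
// Stand-in for the slice of CfnBucket this test needs (assumption, not the real class).
interface FakeCfnBucket {
  versioningConfiguration?: { status: string };
}

// The same modification logic as the EnableVersioningAspect, reduced to a function.
function enableVersioning(bucket: FakeCfnBucket): void {
  const cfg = bucket.versioningConfiguration;
  if (cfg === undefined || cfg.status !== "Enabled") {
    bucket.versioningConfiguration = { status: "Enabled" };
  }
}

// "Unit tests": call the logic directly with hand-built nodes.
const unversioned: FakeCfnBucket = {};
enableVersioning(unversioned);
console.log(unversioned.versioningConfiguration?.status); // "Enabled"

const alreadySet: FakeCfnBucket = { versioningConfiguration: { status: "Enabled" } };
enableVersioning(alreadySet);
console.log(alreadySet.versioningConfiguration?.status); // "Enabled"
```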

&lt;p&gt;My example code includes tests for the Aspects: &lt;a href="https://github.com/martzcodes/blog-aspects/tree/main/test" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-aspects/tree/main/test&lt;/a&gt;, as does the &lt;a href="https://github.com/aws-community-projects/arch-dia" rel="noopener noreferrer"&gt;&lt;code&gt;@aws-community/arch-dia&lt;/code&gt;&lt;/a&gt; library.&lt;/p&gt;

&lt;p&gt;CDK Aspects offer a powerful way to enforce consistency and automate modifications across your cloud infrastructure code. With these best practices and tips, you'll be well-equipped to use Aspects effectively in your AWS CDK applications. 🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;To wrap it up, here are some final thoughts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Harnessing the Power of Aspects&lt;/strong&gt;: CDK Aspects are a powerful tool for AWS developers, offering a way to automate cross-cutting concerns and enforce consistency across an entire application. While they may seem complex at first, understanding how they work and how to use them effectively can greatly enhance your AWS CDK toolkit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exploration and Experimentation&lt;/strong&gt;: Don't be afraid to explore and experiment with Aspects. Whether you're trying to create an architecture diagram generator or a recursive Aspect, there's a lot of potential for creative and effective solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Caution and Diligence&lt;/strong&gt;: Despite their power, it's important to be cautious when working with Aspects. Be aware of potential pitfalls, such as tokenization and the performance implications of using Aspects. Always test your Aspects thoroughly to avoid introducing broad-reaching errors into your application.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ybm9aybgw7fcvoqpazy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ybm9aybgw7fcvoqpazy.gif" width="480" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, CDK Aspects can be your 'Heisenberg' in managing cloud infrastructure - they have the potential to be influential, powerful, and transforming. They provide a way to simplify and automate many tasks that would otherwise require manual, error-prone work.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Say Goodbye to Your CDK Stacks: A Guide to Self-Destruction</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Wed, 22 Feb 2023 20:29:48 +0000</pubDate>
      <link>https://forem.com/aws-builders/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction-31ni</link>
      <guid>https://forem.com/aws-builders/say-goodbye-to-your-cdk-stacks-a-guide-to-self-destruction-31ni</guid>
      <description>&lt;p&gt;Are you tired of constantly managing your CDK Stacks and dealing with the associated costs? If so, self-destructing CDK Stacks might be the solution you've been looking for. With the ability to automatically delete themselves after a set time, these stacks can help free up resources and streamline your development process.&lt;/p&gt;

&lt;p&gt;In this guide, we'll show you how to set up self-destructing CDK Stacks and integrate them into your CI/CD pipeline. By doing so, you can reduce costs and improve the efficiency of your development process. We'll also share some best practices and tips to help you make the most out of this feature. So, if you're ready to optimize your development process, read on to learn how to implement self-destructing CDK Stacks! 🤯&lt;/p&gt;

&lt;p&gt;Code: &lt;a href="https://github.com/martzcodes/blog-cdk-self-destruct" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-cdk-self-destruct&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Will We Make?
&lt;/h2&gt;

&lt;p&gt;We'll create a Step Function that will be executed during the deployment of a Stack and will wait for a specified period of time. Since Step Functions are charged based on state transitions, and not the duration of the run, this will not result in additional costs. Additionally, Standard Step Functions can run for up to a year, providing us with plenty of flexibility. Once the Wait period is over, the Step Function will use the AWS SDK to automatically delete the Stack. 🗑️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpueq1vu5rmx9u6cjh2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpueq1vu5rmx9u6cjh2a.png" alt=" " width="654" height="986"&gt;&lt;/a&gt;&lt;/p&gt;
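&lt;p&gt;In Amazon States Language terms, the flow we're after is roughly a Wait state followed by an SDK task that deletes the stack. The definition below is hand-written purely for illustration - the CDK constructs we build next generate the real definition at synth time:&lt;br&gt;
&lt;/p&gt;

```typescript
// A hand-written approximation of the state machine definition this construct
// produces (illustrative only -- the CDK generates the real ASL at synth time).
function selfDestructDefinition(stackName: string, waitSeconds: number) {
  return {
    StartAt: "WaitForSelfDestruct",
    States: {
      WaitForSelfDestruct: {
        Type: "Wait",
        Seconds: waitSeconds,
        Next: "DeleteStack",
      },
      DeleteStack: {
        Type: "Task",
        // SDK integration ARN format; "my-ephemeral-stack" below is a made-up name.
        Resource: "arn:aws:states:::aws-sdk:cloudformation:deleteStack",
        Parameters: { StackName: stackName },
        End: true,
      },
    },
  };
}

const definition = selfDestructDefinition("my-ephemeral-stack", 24 * 60 * 60);
console.log(definition.States.WaitForSelfDestruct.Seconds); // 86400
```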

&lt;h2&gt;
  
  
  Creating a SelfDestruct Construct
&lt;/h2&gt;

&lt;p&gt;We're going to get started by creating a new CDK Construct that can be used in any project. The only property input this Construct needs is the Duration after which the stack should destroy itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;SelfDestructProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SelfDestruct&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SelfDestructProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;duration&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, we're going to want the Step Function to handle a few things. It should:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Re-execute the Step Function on every Stack deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Close out old executions on new deployments (only have one execution running at any given time)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for a pre-defined duration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the Stack after the Wait period&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  List Already Running Step Functions
&lt;/h3&gt;

&lt;p&gt;First, we need to get the list of running executions of this Step Function. We can do that with the &lt;a href="https://docs.aws.amazon.com/step-functions/latest/apireference/API_ListExecutions.html" rel="noopener noreferrer"&gt;&lt;code&gt;states:ListExecutions&lt;/code&gt;&lt;/a&gt; SDK Command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listExecutions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CallAwsService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`ListExecutions`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;listExecutions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;states:ListExecutions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamResources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;StateMachineArn.$&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$$.StateMachine.Id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;StatusFilter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;RUNNING&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sfn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🏃‍♂️ We pass in the &lt;code&gt;StatusFilter: "RUNNING"&lt;/code&gt; to make sure we only get back executions that are still in the RUNNING state. Typically, there should only be one of these (from the last deployment).&lt;/p&gt;

&lt;h3&gt;
  
  
  Stop Other Executions
&lt;/h3&gt;

&lt;p&gt;Next we'll want to &lt;code&gt;Map&lt;/code&gt; over the returned executions. &lt;code&gt;Map&lt;/code&gt; states are effectively the Step Functions equivalent of a for-loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;executionsMap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`ExecutionsMap`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;inputPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$.Executions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this loop, we need to make sure the execution doesn't stop itself (not yet, at least). We do this by comparing each mapped item's execution ARN against the ARN of the currently running execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stopExecution&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CallAwsService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`StopExecution`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;stopExecution&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;states:StopExecution&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamResources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;Cause&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Superseded&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ExecutionArn.$&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$.ExecutionArn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sfn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;executionsMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iterator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;NotSelf?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;when&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;Condition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;not&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;Condition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringEqualsJsonPath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$.ExecutionArn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$$.Execution.Id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nx"&gt;stopExecution&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;otherwise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pass&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;self&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$.ExecutionArn&lt;/code&gt; refers to the current item in the mapped executions, while &lt;code&gt;$$.Execution.Id&lt;/code&gt; refers to the Step Function execution itself. The &lt;code&gt;$$&lt;/code&gt; prefix is an escape to the "top-level" context object.&lt;/p&gt;
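At runtime, the Choice state effectively filters out the execution whose ARN matches the current execution's Id. A minimal sketch of that filtering logic in plain TypeScript (the `executionsToStop` helper is hypothetical and for illustration only; the real filtering happens inside the state machine):

```typescript
// Hypothetical helper mirroring the "NotSelf?" Choice: given the list of
// running executions and the current execution's Id ($$.Execution.Id),
// keep only the executions that should be stopped.
interface RunningExecution {
  ExecutionArn: string;
}

const executionsToStop = (
  executions: RunningExecution[],
  selfExecutionId: string
): RunningExecution[] =>
  executions.filter((e) => e.ExecutionArn !== selfExecutionId);
```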

&lt;h3&gt;
  
  
  Check and Wait to Delete
&lt;/h3&gt;

&lt;p&gt;Next, we can check whether this execution was triggered by a Stack that is already destroying itself. If it was, we can exit early. This works out nicely: since we just stopped the other executions, we're also tying up loose ends from previous deployments by making sure no stale executions are left running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;wait&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Wait&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;time&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;WaitTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;wasDelete&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Choice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;WasDelete?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;when&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;Condition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringEquals&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$$.Execution.Input.Action&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Delete&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Succeed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DeleteSuccess&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;otherwise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As part of this, the execution Waits for the duration we set. This could be anywhere from seconds to days (up to the one-year Step Functions execution limit).&lt;/p&gt;
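Standard workflow executions max out at one year, so any requested wait beyond that is effectively capped. A quick sketch of clamping a requested wait to that ceiling (the helper name and clamping behavior are assumptions for illustration; the construct itself just passes the `Duration` through):

```typescript
// Standard Step Functions executions can run for at most one year, so a
// requested wait beyond that can never complete. Hypothetical helper that
// clamps a requested wait (in seconds) into the valid range.
const MAX_WAIT_SECONDS = 365 * 24 * 60 * 60; // one year

const clampWaitSeconds = (requestedSeconds: number): number =>
  Math.min(Math.max(requestedSeconds, 0), MAX_WAIT_SECONDS);
```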

&lt;p&gt;After the Wait is over, we need to delete the stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;deleteStack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CallAwsService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`DeleteStack`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;deleteStack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cloudformation:DeleteStack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;iamResources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;StackName.$&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;$$.Execution.Input.StackName&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cloudformation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is done by an AWS SDK Call &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DeleteStack.html" rel="noopener noreferrer"&gt;&lt;code&gt;cloudformation:DeleteStack&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the State Machine
&lt;/h3&gt;

&lt;p&gt;With all the steps created, we can tie them together to create the actual Step Function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;finished&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Succeed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Finished`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;listExecutions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;executionsMap&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;executionsMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;wasDelete&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;deleteStack&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;deleteStack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;finished&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StateMachine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`SelfDestructMachine`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;definition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;listExecutions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running the Step Function with Every Deployment
&lt;/h3&gt;

&lt;p&gt;This construct is only useful if it is consistently run with Stack Deployments. So, let's add a Custom Resource that executes the Step Function as part of the Deployment. We can do this with an &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.custom_resources.AwsCustomResource.html" rel="noopener noreferrer"&gt;&lt;code&gt;AwsCustomResource&lt;/code&gt;&lt;/a&gt; construct:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AwsCustomResource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`SelfDestructCR`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;onCreate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;startExecution&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Create&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;StackArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;StackName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="na"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;physicalResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhysicalResourceId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SelfDestructCR&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;StepFunctions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;onDelete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;startExecution&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Delete&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;StackArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;StackName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="na"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;physicalResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhysicalResourceId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SelfDestructCR&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;StepFunctions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;startExecution&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Update&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;getTime&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;StackArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;StackName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stackName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="na"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;sm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;physicalResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhysicalResourceId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SelfDestructCR&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;StepFunctions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsCustomResourcePolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromSdkCalls&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;sm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the Stack deploys, it makes a different SDK call based on the type of Stack operation (Create, Update, Delete). Custom Resources only execute when their input parameters change. &lt;code&gt;onCreate&lt;/code&gt; and &lt;code&gt;onDelete&lt;/code&gt; are inherently "new" since the stack is being created or destroyed, but to make sure the &lt;code&gt;onUpdate&lt;/code&gt; call happens on every deployment we have to change an input parameter within it. That's why we set the &lt;code&gt;Version&lt;/code&gt; to the current time.&lt;/p&gt;
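To see why the timestamp forces an update, consider how the serialized input changes between synths. A small sketch (the `buildUpdateInput` helper is hypothetical; the real code inlines this object in the `onUpdate` parameters):

```typescript
// Hypothetical helper showing the onUpdate input. Because Version embeds the
// current time, the stringified parameters differ on every synth, which is
// what makes the custom resource re-run its SDK call on stack updates.
const buildUpdateInput = (stackArn: string, stackName: string): string =>
  JSON.stringify({
    Action: "Update",
    Version: new Date().getTime().toString(),
    StackArn: stackArn,
    StackName: stackName,
  });
```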

&lt;h2&gt;
  
  
  Tips for Self-Destruction
&lt;/h2&gt;

&lt;p&gt;💡 Did you notice that the code above didn't explicitly set any IAM permissions? CDK + Step Functions handle all of that for you. By defining the &lt;code&gt;action&lt;/code&gt;, &lt;code&gt;iamAction&lt;/code&gt;, and &lt;code&gt;service&lt;/code&gt; properties on the Step and &lt;code&gt;AwsCustomResource&lt;/code&gt; constructs, CDK automatically infers the IAM permissions and makes sure they are attached to the resources so they can perform their functions!&lt;/p&gt;
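For instance, the `StopExecution` task above ends up with a policy statement along these lines (the object below is an illustrative sketch of the shape, not the exact CloudFormation output CDK synthesizes):

```typescript
// Roughly the IAM statement CDK infers from the iamAction/iamResources props
// on the StopExecution task. Shown as a plain object for illustration only.
const stopExecutionStatement = {
  Effect: "Allow",
  Action: "states:StopExecution",
  Resource: "*",
};
```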

&lt;h3&gt;
  
  
  Creating a DeveloperStack
&lt;/h3&gt;

&lt;p&gt;For a better DevEx you could create a standardized Stack template that includes the self-destruct Construct by default. For example, you could publish &lt;code&gt;BlogCdkSelfDestructStack&lt;/code&gt; as your common stack in an npm library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BlogCdkSelfDestructStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SelfDestruct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`SelfDestruct`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;minutes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When teams create new projects, instead of basing their stacks on &lt;code&gt;cdk.Stack&lt;/code&gt;, they would base them on &lt;code&gt;BlogCdkSelfDestructStack&lt;/code&gt;, which has self-destruction built in!&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatically Detecting Temporary Stacks
&lt;/h3&gt;

&lt;p&gt;Clearly you don't want your production stacks to delete themselves. Another tip is to introduce a property into your base stack that indicates whether it should self-destruct. You could drive this from stack naming conventions, or from developer and CI/CD properties. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;BlogCdkSelfDestructStackProps&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StackProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;cicd&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;developer&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;production&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BlogCdkSelfDestructStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BlogCdkSelfDestructStackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;cicd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;developer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;production&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;developer&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;production&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Don't use developer stacks in production&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;production&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;developer&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;cicd&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SelfDestruct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`SelfDestruct`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;duration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;minutes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then in your bin file, you would pass in the appropriate properties (which could come from node config, environment variables, etc.). &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html" rel="noopener noreferrer"&gt;CDK Best Practices&lt;/a&gt; recommend &lt;code&gt;Configure with properties and methods, not environment variables&lt;/code&gt;, which is why you would resolve these properties in your bin file rather than inside the constructs.&lt;/p&gt;

&lt;p&gt;Many CI/CD systems have pre-defined system environment variables, and those could be used to automatically detect CI/CD for self-destruction. For example, you could create a namespaced Stack that gets deployed as part of an automated PR integration check. Then, succeed or fail, the stack would automatically clean up after itself without CI/CD having to do it.&lt;/p&gt;
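As a sketch, a bin file could derive the stack props from environment variables like this (treating `CI` as set by the CI system and `NODE_ENV` as marking production are assumptions about your environment, not universal guarantees):

```typescript
// Hypothetical mapping from environment variables to the stack props.
// Many CI systems (GitHub Actions, GitLab CI, CircleCI) set CI=true;
// adapt the checks to whatever your pipeline actually exposes.
const propsFromEnv = (env: Record<string, string | undefined>) => ({
  cicd: env.CI === "true",
  developer: env.CI !== "true" && env.NODE_ENV !== "production",
  production: env.NODE_ENV === "production",
});
```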

&lt;h3&gt;
  
  
  Can I Extend the Wait Without Re-Deploying?
&lt;/h3&gt;

&lt;p&gt;Absolutely! Simply re-execute the Step Function. The new execution stops the old one and restarts the timer, giving you more time if you need it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And just like that, you're a self-destructing CDK Stack pro! You can now confidently say "adios" to stacks that are taking up too much space and draining your resources.&lt;/p&gt;

&lt;p&gt;With this newfound knowledge, you can save on infrastructure costs and keep your AWS account looking fresh and tidy. Plus, you'll have the satisfaction of knowing that you're incorporating a little excitement and danger into your development process.&lt;/p&gt;

&lt;p&gt;Just remember, with great power comes great responsibility. Be sure to set a reasonable Wait period and test your code thoroughly before deploying. And don't worry, we won't tell anyone if you shed a tear or two as your stacks go boom.&lt;/p&gt;

</description>
      <category>codenewbie</category>
      <category>mentorship</category>
      <category>community</category>
      <category>career</category>
    </item>
    <item>
      <title>Improving a Serverless App To Cross-Post Blogs</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Tue, 21 Feb 2023 15:44:39 +0000</pubDate>
      <link>https://forem.com/aws-builders/improving-a-serverless-app-to-cross-post-blogs-1c9</link>
      <guid>https://forem.com/aws-builders/improving-a-serverless-app-to-cross-post-blogs-1c9</guid>
      <description>&lt;p&gt;Allen Helton is an AWS Hero and absolute LEGEND. In December he wrote a post titled "&lt;a href="%5Bhttps://www.readysetcloud.io/blog/allen.helton/how-i-built-a-serverless-automation-to-cross-post-my-blogs/%5D(https://www.readysetcloud.io/blog/allen.helton/how-i-built-a-serverless-automation-to-cross-post-my-blogs/)"&gt;&lt;strong&gt;I Built a Serverless App To Cross-Post My Blogs&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;"&lt;/strong&gt; and after some begging from some &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;AWS Community Builders&lt;/a&gt; he published his code to our shiny new &lt;a href="https://github.com/aws-community-projects/blog-crossposting-automation" rel="noopener noreferrer"&gt;AWS Community Projects&lt;/a&gt; GitHub Org.&lt;/p&gt;

&lt;p&gt;Allen is quite a prolific writer and publishes his articles in (at least) four places. He has a self-hosted static blog built with Amplify using Hugo, as well as using &lt;a href="http://dev.to"&gt;dev.to&lt;/a&gt;, hashnode, and medium. His self-hosted blog on his personal domain is his primary platform and &lt;a href="http://dev.to"&gt;dev.to&lt;/a&gt;, hashnode, and medium all get canonical URLs for SEO purposes. 🌟&lt;/p&gt;

&lt;p&gt;While Allen's code is great, it does have some &lt;a href="https://github.com/aws-community-projects/blog-crossposting-automation#limitations" rel="noopener noreferrer"&gt;limitations&lt;/a&gt;. For instance, it's written using SAM/yaml, requires a Hugo/Amplify built blog, effectively has no optional features, and he still manually uploads image assets to S3 for all of his articles. 😱&lt;/p&gt;

&lt;p&gt;In this article, we'll go over my fork of Allen's code, where I have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Converted the project to use AWS CDK&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Made Hugo/Amplify and most of the other platforms optional 🚀&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Added a direct (private) GitHub webhook integration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatically parsed images committed to GitHub and uploaded them to a public S3 Bucket (and updated the content to use them)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm excited to share these improvements with you and hope you find them useful!&lt;/p&gt;

&lt;p&gt;Code: &lt;a href="https://github.com/martzcodes/blog-crossposting-automation" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-crossposting-automation&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Converting the Project to CDK
&lt;/h2&gt;

&lt;p&gt;Let's talk about converting projects from a format, like SAM, to CDK. It can be a bit tricky, but the easiest way is to focus on the architecture. Get the architecture skeleton right first, and everything else will fall into place. 💪&lt;/p&gt;

&lt;p&gt;So, looking at Allen's project structure, we can see that he has one DynamoDB table, five lambda functions, and a step function. One lambda is triggered by an Amplify EventBridge Event. That lambda then triggers a step function where the other four lambdas are used. 🤓&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeap7rxkuno82fe5rmtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeap7rxkuno82fe5rmtp.png" alt="Allen's architecture invokes a lambda from an Amplify status event which triggers a step function that stores publish status in DynamoDB as it posts to the target services. Image assets are manually stored in S3" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To improve this, we're going to make Amplify optional and add the ability to pull images used in GitHub and re-store them in S3, bringing the S3 bucket into our CloudFormation Stack. 🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpf80ixx05cioq68y756.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpf80ixx05cioq68y756.png" alt="Changes in Red.  Make Amplify Optional, Bring the bucket into the stack and have the Ingest Lambda re-store images from GitHub to our bucket." width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can start by creating a Construct for the DynamoDB Table &lt;a href="https://github.com/martzcodes/blog-crossposting-automation/blob/main/lib/dyanmo.ts" rel="noopener noreferrer"&gt;&lt;code&gt;lib/dynamo.ts&lt;/code&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DynamoDb&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;Construct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`ActivityPubTable`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;sortKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;billingMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BillingMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PAY_PER_REQUEST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;timeToLiveAttribute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ttl&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;removalPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addGlobalSecondaryIndex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;indexName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GSI1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GSI1PK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;sortKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GSI1SK&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;projectionType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ProjectionType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ALL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We reference the secret in CDK (and then manually put the secret values into it):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromSecretNameV2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;`CrosspostSecrets`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;secretName&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Allen had a single lambda do the data transformations for all three blog services... I opted to split that up for better traceability. My architecture ends up with somewhere between 3 and 7 lambdas depending on which options you turn on. The lambdas are only created if you pass in the corresponding properties. They're all created the same general way (&lt;em&gt;side note... I also updated Allen's code from JavaScript to TypeScript #scopecreep&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lambdaProps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NodejsFunctionProps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;architecture&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Architecture&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ARM_64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;minutes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODEJS_18_X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TABLE_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;SECRET_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;secretName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sendApiRequestFn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;NodejsFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`SendApiRequestFn`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;lambdaProps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`../functions/send-api-request.ts`&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;sendApiRequestFn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEnvironment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DRY_RUN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;dryRun&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grantRead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sendApiRequestFn&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to create the Step Function. Step Functions are notoriously hard to code since they use Amazon States Language (ASL) to define all of the steps. I created a separate &lt;a href="https://github.com/martzcodes/blog-crossposting-automation/blob/main/lib/step-function.ts" rel="noopener noreferrer"&gt;CrossPostStepFunction&lt;/a&gt; construct.&lt;/p&gt;

&lt;p&gt;My step function adds the ability to pick which service creates the canonical URL: it posts to that service first, retrieves the canonical URL, and uses it for the subsequent services. There's also a lot of logic to omit steps from the State Machine when properties weren't configured... which makes this very flexible.&lt;/p&gt;

&lt;p&gt;We were able to abstract the process for posting to a service into a &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_stepfunctions.StateMachineFragment.html" rel="noopener noreferrer"&gt;State Machine Fragment&lt;/a&gt;. This fragment is a CDK construct that lets us re-use the logic shared by the parallel paths that post to each service. When I configured my stack to not send status emails, skip Hugo, and make Hashnode the primary blog platform, we get a Step Function that looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8zdciyaiu3s8xk544w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffj8zdciyaiu3s8xk544w.png" width="800" height="999"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A lot of this is 1:1 with what Allen had in his &lt;a href="https://github.com/aws-community-projects/blog-crossposting-automation/blob/main/workflows/cross-post.asl.json" rel="noopener noreferrer"&gt;ASL JSON file&lt;/a&gt;. One interesting fact: Allen's JSON is 953 lines, while the two files of TypeScript that make up my Step Function end up being 596 lines (431 + 165)... so &lt;em&gt;almost&lt;/em&gt; half the size while adding a few additional features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding a Direct GitHub Webhook Integration
&lt;/h2&gt;

&lt;p&gt;For our next trick, we will use GitHub Webhook events to trigger our cross-posting, instead of Amplify Events. We can do this by adding a Function URL to the Identify Content Lambda:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fnUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;identifyNewContentFn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addFunctionUrl&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;authType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FunctionUrlAuthType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NONE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;allowedOrigins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CfnOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`GithubWebhook`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fnUrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then enter this Function URL into our GitHub repo's webhook settings, subscribed to Push events. This lets us skip some of the identify lambda's code... since the push event fires for every push and already includes the list of files that were added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;initializeOctokit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;newContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="na"&gt;p&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}[],&lt;/span&gt;
            &lt;span class="na"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
              &lt;span class="nl"&gt;added&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
              &lt;span class="nl"&gt;modified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
              &lt;span class="c1"&gt;// ... there is more stuff here, but this is all we need&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;addedFiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
              &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;addedFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;blogPathDefined&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;
                  &lt;span class="nx"&gt;addedFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BLOG_PATH&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/`&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
                &lt;span class="nx"&gt;addedFile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.md&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
              &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;addedFiles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;addedFile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="na"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;addedFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="p"&gt;})),&lt;/span&gt;
            &lt;span class="p"&gt;];&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}[]&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;recentCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getRecentCommits&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;recentCommits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;newContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getNewContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;recentCommits&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getContentData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imagesProcessed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;saveImagesToS3&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;processNewContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imagesProcessed&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The webhook's event body includes a list of the commits. The Amplify event doesn't have this same list, so we can save some GitHub API calls here. I &lt;em&gt;think&lt;/em&gt; this would also be compatible with Allen's code (just in case he wants to switch to that 😈).&lt;/p&gt;
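&lt;p&gt;For reference, the slice of the push payload the handler relies on can be modeled with a couple of small types (a simplified sketch; the field names match GitHub's push event, and everything else is omitted):&lt;br&gt;
&lt;/p&gt;

```typescript
// Minimal model of the GitHub push-event fields the handler uses.
interface PushCommit {
  id: string;
  added: string[];
  modified: string[];
}

interface PushEventBody {
  commits: PushCommit[];
}

// Collect newly added markdown posts (optionally restricted to a blog path),
// tagged with the commit that introduced them.
export const addedPosts = (
  body: PushEventBody,
  blogPath?: string
): { fileName: string; commit: string }[] =>
  body.commits.flatMap((commit) =>
    commit.added
      .filter((file) => file.endsWith(".md"))
      .filter((file) => !blogPath || file.startsWith(`${blogPath}/`))
      .map((fileName) => ({ fileName, commit: commit.id }))
  );
```

&lt;p&gt;The handler's &lt;code&gt;reduce&lt;/code&gt; does the same thing with an accumulator; &lt;code&gt;flatMap&lt;/code&gt; is just a more compact spelling of it.&lt;/p&gt;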

&lt;h2&gt;
  
  
  Parse and Store Images in S3
&lt;/h2&gt;

&lt;p&gt;But what do we do about images? My overall idea here (for my personal use) was to use a private GitHub repo to store these posts (to avoid SEO shenanigans) and just use relative image linking within the repo for the draft images... that way I could use VS Code's Markdown Preview or &lt;a href="https://obsidian.md/" rel="noopener noreferrer"&gt;Obsidian.md&lt;/a&gt; to draft my posts. I asked Allen what he does and was surprised to hear that he hasn't automated this part yet... and as part of his writing he manually uploads images to S3.&lt;/p&gt;

&lt;p&gt;So, I got a little creative with a Regular Expression and parsed out any embedded markdown images... which are formatted as a markdown link with an exclamation point in front (ironically, I can't post an example because my RegExp would incorrectly ingest it 😅)&lt;br&gt;
&lt;/p&gt;
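&lt;p&gt;As a quick illustration, here's a hypothetical snippet (the sample string and file path are made up, and the image link is built by concatenation so this post's own ingestion regex won't match it) showing how the pattern pulls out each relative image path:&lt;/p&gt;

```typescript
// Matches markdown image links and captures the alt text and the path.
const imgRegex = /!\[(.*?)\]\((.*?)\)/g;

// Hypothetical post content; the "!" is concatenated on so this
// example doesn't itself get ingested as an image link.
const sample =
  "Intro text " + "!" + "[architecture diagram](./images/arch.png)" + " outro";

const relativePaths: string[] = [];
let match: RegExpExecArray | null;
while ((match = imgRegex.exec(sample)) !== null) {
  // match[1] is the alt text, match[2] is the relative path
  relativePaths.push(match[2]);
}

console.log(relativePaths); // one entry: "./images/arch.png"
```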

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;contentData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;sendStatusEmail&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imgRegex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sr"&gt;/!&lt;/span&gt;&lt;span class="se"&gt;\[(&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;?)\]\((&lt;/span&gt;&lt;span class="sr"&gt;.*&lt;/span&gt;&lt;span class="se"&gt;?)\)&lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;workingContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageSet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Set&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;imgRegex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;imageSet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;images&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nx"&gt;imageSet&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// no images in the post... passthrough&lt;/span&gt;
    &lt;span class="nx"&gt;contentData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blogFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;newContent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;j&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blogSplit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;blogFile&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;blogSplit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blogBase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;blogSplit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Mapping&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;images&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;githubPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;blogBase&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageSplit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;imageExtension&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;imageSplit&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;imageSplit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;blogFile&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;imageExtension&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\ &lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`https://s3.amazonaws.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MEDIA_BUCKET&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;s3Path&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;postContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;octokit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET /repos/{owner}/{repo}/contents/{path}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OWNER&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REPO&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;githubPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;postContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;base64&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// upload images to s3&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;putImage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MEDIA_BUCKET&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;s3Path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;putImage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;s3Mapping&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;s3Url&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rewriteLink&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`![&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;](&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;s3Mapping&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;&lt;span class="s2"&gt;)`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;workingContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;workingContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;imgRegex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;rewriteLink&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;contentData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;workingContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;contentData&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code parses out the image links, fetches the images from GitHub, uploads them to S3, and replaces the relative links in the blog post with the public S3 URLs before proceeding. One caveat: the images have to live in the GitHub repo... if they don't, things will break. That would be an easy thing for someone to fix/make more flexible 😉&lt;/p&gt;
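&lt;p&gt;A minimal sketch of that fix (the &lt;code&gt;isRelativeImage&lt;/code&gt; helper is hypothetical, not part of the actual pipeline): before calling the GitHub contents API, skip any link that is already an absolute URL and leave it in the post untouched.&lt;/p&gt;

```typescript
// Hypothetical guard: only relative paths should be fetched from the
// repo and re-uploaded to S3; absolute URLs are assumed to already be
// hosted somewhere and can pass through unchanged.
const isRelativeImage = (link: string): boolean => {
  if (link.startsWith("http://")) return false;
  if (link.startsWith("https://")) return false;
  return true;
};

console.log(isRelativeImage("./images/arch.png")); // true
console.log(isRelativeImage("https://example.com/pic.png")); // false
```

In the loop over &lt;code&gt;images&lt;/code&gt;, a &lt;code&gt;continue&lt;/code&gt; when this returns false would leave those links alone.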

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;That was quite the journey! Although this project took me longer than expected, it was still a lot of fun to work on. I had a tough time trying to limit myself from adding more and more features to it. 😂&lt;/p&gt;

&lt;p&gt;After spending some time with it, I'm not entirely convinced that the return on investment is worth it &lt;em&gt;for my specific needs&lt;/em&gt;. I only post to two platforms, Hashnode and &lt;a href="http://dev.to"&gt;dev.to&lt;/a&gt;, and it's simple enough for me to copy and paste from one to the other and add the canonical URL to the &lt;a href="http://dev.to"&gt;dev.to&lt;/a&gt; metadata. In fact, the two platforms even have an integration that might allow me to skip the copy/paste step entirely. 🤔&lt;/p&gt;

&lt;p&gt;But even though I may not use this stack myself, I do hope that it showcases the power and flexibility of creating with CDK. In comparing SAM to CDK... the CDK code clocked in at 907 lines of &lt;strong&gt;&lt;em&gt;code&lt;/em&gt;&lt;/strong&gt; (&lt;em&gt;including the Step Function + additional features&lt;/em&gt;) while the SAM YAML + ASL JSON came in at 1259 lines of &lt;strong&gt;&lt;em&gt;configuration&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This project would have been much quicker to build if I already had a Hugo/Amplify setup or if I hadn't converted everything to TypeScript or added all the other features. 😅&lt;/p&gt;

&lt;p&gt;What do you think? Have you ever worked on a project that ended up taking longer than you expected? Did you find it hard to limit yourself from adding more and more features? What do you think about the differences between SAM and CDK here? Let's chat about it! 💬&lt;/p&gt;

</description>
      <category>welcome</category>
    </item>
    <item>
      <title>Core Web Vitals, CDK Constructs and YOU!</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Wed, 01 Feb 2023 23:01:53 +0000</pubDate>
      <link>https://forem.com/aws-builders/core-web-vitals-cdk-constructs-and-you-3mca</link>
      <guid>https://forem.com/aws-builders/core-web-vitals-cdk-constructs-and-you-3mca</guid>
      <description>&lt;p&gt;As a web developer, you know the importance of delivering a fast and smooth user experience. But with the constantly evolving web landscape, it can be challenging to keep up with the latest best practices. In this blog post, I'll take you through the ins and outs of integrating Core Web Vitals into your development projects. With its focus on real-world user experience, Core Web Vitals is quickly becoming an essential part of any web development process. So buckle up, grab a coffee and let's dive in together to see how you can elevate your website's performance and provide a top-notch user experience for your visitors!&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Core Web Vitals?
&lt;/h2&gt;

&lt;p&gt;Core Web Vitals are a set of metrics defined by Google to measure the user experience on the web. They focus on the key aspects of website performance that directly impact user experience, such as loading speed, interactivity, and visual stability. The basics of Core Web Vitals are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Largest Contentful Paint (LCP): measures loading performance and is calculated as the time it takes for the largest content element on the page (e.g. an image or text block) to load and become visible to the user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;First Input Delay (FID): measures interactivity and is calculated as the time from a user's first interaction with the page (e.g. clicking a button) until the browser is actually able to respond to that interaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cumulative Layout Shift (CLS): measures visual stability and is calculated as the total amount of unexpected layout shifts that occur during the page load.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
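&lt;p&gt;Each metric comes with published "good" and "poor" boundaries (LCP: 2.5s/4s, FID: 100ms/300ms, CLS: 0.1/0.25). As a minimal sketch, a hypothetical &lt;code&gt;rate&lt;/code&gt; helper can map a measured value onto Google's three ratings:&lt;/p&gt;

```typescript
// Rates a metric value against its "good" and "poor" boundaries.
// Values at or below the good boundary are "good"; values above the
// poor boundary are "poor"; everything in between "needs improvement".
const rate = (value: number, good: number, poor: number): string => {
  if (value > poor) return "poor";
  if (value > good) return "needs improvement";
  return "good";
};

console.log(rate(2.1, 2.5, 4.0));  // LCP of 2.1s  -> "good"
console.log(rate(250, 100, 300));  // FID of 250ms -> "needs improvement"
console.log(rate(0.3, 0.1, 0.25)); // CLS of 0.3   -> "poor"
```

In a real app these values would come from a measurement library such as the &lt;code&gt;web-vitals&lt;/code&gt; npm package, or from CloudWatch RUM itself.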

&lt;p&gt;These metrics are considered important because they directly impact user experience and are tied to search ranking factors. Websites that score well on Core Web Vitals are likely to have better search engine rankings and provide a better user experience. In other words, they lead to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Happy Users: By optimizing Core Web Vitals, your website will load faster, be more interactive, and have less visual instability. This leads to a better overall user experience, which means visitors will stay on your site longer and be more likely to return in the future.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Better SEO: Google uses Core Web Vitals as part of its ranking algorithm, so websites that score well on these metrics are likely to have better search engine rankings. That means more visibility, more traffic, and more potential customers!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Increased Conversions: A great user experience leads to higher engagement and increased conversions. By optimizing Core Web Vitals, you'll give your visitors a smooth and seamless experience that will keep them coming back for more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Industry Standard: Core Web Vitals are becoming the industry standard for measuring website performance and user experience. By optimizing these metrics, you'll ensure that your website is up-to-date and providing the best possible experience for your visitors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Real User Monitoring with CloudWatch RUM and CDK
&lt;/h2&gt;

&lt;p&gt;Let's get started with Core Web Vitals by writing some reusable components: a CDK Construct and a TypeScript snippet so that we can add CloudWatch RUM (Real User Monitoring) to our application.&lt;/p&gt;

&lt;p&gt;The code for this section is located at &lt;a href="http://github.com/martzcodes/blog-cdk-rum" rel="noopener noreferrer"&gt;http://github.com/martzcodes/blog-cdk-rum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We're going to start with a baseline project that includes a simple website deployed to S3 and hosted by a CloudFront distribution. This is similar to the site we created in my article about how to &lt;a href="https://matt.martz.codes/protect-a-static-site-with-auth0-using-lambdaedge-and-cloudfront" rel="noopener noreferrer"&gt;Protect a Static Site with Auth0 Using Lambda@Edge and CloudFront&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We're going to focus on the 3 most important files:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-construct.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/lib/rum-runner-construct.ts&lt;/code&gt; - CDK Construct&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-fn.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/lib/rum-runner-fn.ts&lt;/code&gt; - Custom Resource Function&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/ui/rum.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/ui/rum.ts&lt;/code&gt; - Typescript Snippet to add to our front-end code&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  CDK Construct
&lt;/h3&gt;

&lt;p&gt;Our &lt;code&gt;RumRunnerConstruct&lt;/code&gt;, &lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-construct.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/lib/rum-runner-construct.ts&lt;/code&gt;&lt;/a&gt;, will take in two properties to its interface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;bucket&lt;/code&gt; - the UI's deployment bucket&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cloudFront&lt;/code&gt; - the UI's CloudFront distribution&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudWatch RUM requires a Cognito Identity Pool to allow unauthenticated access for the CloudWatch RUM web client to publish events.&lt;/p&gt;

&lt;p&gt;We create the Identity Pool&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cwRumIdentityPool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CfnIdentityPool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cw-rum-identity-pool&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;allowUnauthenticatedIdentities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A role for unauthenticated users to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cwRumUnauthenticatedRole&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Role&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cw-rum-unauthenticated-role&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;assumedBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FederatedPrincipal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cognito-identity.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;StringEquals&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cognito-identity.amazonaws.com:aud&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumIdentityPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ForAnyValue:StringLike&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cognito-identity.amazonaws.com:amr&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;unauthenticated&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sts:AssumeRoleWithWebIdentity&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we make sure that role has access to put events into CloudWatch RUM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;cwRumUnauthenticatedRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addToPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Effect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ALLOW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum:PutRumEvents&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;`arn:aws:rum:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;
        &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;account&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:appmonitor/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distributionDomainName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we attach the role to the unauthenticated users:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CfnIdentityPoolRoleAttachment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cw-rum-identity-pool-role-attachment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumIdentityPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;unauthenticated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumUnauthenticatedRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to create the app monitor, which we do by using the Level 1 CDK Construct &lt;code&gt;CfnAppMonitor&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cwRumAppMonitor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CfnAppMonitor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cw-rum-app-monitor&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distributionDomainName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distributionDomainName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;appMonitorConfiguration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;allowCookies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;enableXRay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sessionSampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;telemetries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;errors&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;performance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumIdentityPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;guestRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumUnauthenticatedRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;cwLogEnabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to automatically include the newly created app monitor client into our app... we need to take a few extra steps in our Construct. We'll need to run a Custom Resource to fetch some metadata and store it in the UI's bucket for the UI to load. The RUM App Monitor web client needs the CloudWatch RUM App Monitor Id, the Role ARN, and the Identity Pool Id.&lt;/p&gt;

&lt;p&gt;We create the NodeJS Function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rumRunnerFn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;NodejsFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum-runner&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODEJS_18_X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucketName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;RUM_APP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distributionDomainName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;GUEST_ROLE_ARN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumUnauthenticatedRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roleArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;IDENTITY_POOL_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cwRumIdentityPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__dirname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./rum-runner-fn.ts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And give it the right permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grantWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rumRunnerFn&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;rumRunnerFn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addToRolePolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PolicyStatement&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Effect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ALLOW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;`arn:aws:rum:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;
        &lt;span class="nx"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;account&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:appmonitor/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distributionDomainName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum:GetAppMonitor&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we create the Custom Resource, with a dependency on the App Monitor so that it runs AFTER the App Monitor is created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rumRunnerProvider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Provider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum-runner-provider&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;onEventHandler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rumRunnerFn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customResource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CustomResource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum-runner-resource&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;serviceToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rumRunnerProvider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Bump to force an update&lt;/span&gt;
    &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;customResource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addDependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cwRumAppMonitor&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Custom Resource Function
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html" rel="noopener noreferrer"&gt;CloudFormation Custom Resources&lt;/a&gt; are a way to write some provisioning logic that is executed as part of a CDK (CloudFormation) deployment. The Custom Resource will invoke our lambda (&lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/lib/rum-runner-fn.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/lib/rum-runner-fn.ts&lt;/code&gt;&lt;/a&gt;), which will use the AWS SDK to fetch the App Monitor Id after it's created and store it in S3, along with the Role ARN and Identity Pool Id.&lt;/p&gt;
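&lt;p&gt;For context, a Provider-backed handler receives a request object whose &lt;code&gt;RequestType&lt;/code&gt; is &lt;code&gt;Create&lt;/code&gt;, &lt;code&gt;Update&lt;/code&gt; or &lt;code&gt;Delete&lt;/code&gt;. Here's a minimal sketch of that lifecycle — the shape below is illustrative only; the repo's actual handler ignores the event entirely:&lt;/p&gt;

```typescript
// Illustrative sketch of a Provider framework onEvent handler lifecycle.
// The real handler in /lib/rum-runner-fn.ts does not inspect the event.
interface OnEventRequest {
  RequestType: "Create" | "Update" | "Delete";
}

export async function handler(event: OnEventRequest) {
  if (event.RequestType === "Delete") {
    // Nothing to clean up here; rum.json disappears along with the bucket.
    return {};
  }
  // On Create/Update we would fetch the App Monitor and write rum.json to S3.
  return {};
}
```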

&lt;p&gt;We import and create two clients:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;PutObjectCommand&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-s3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RUMClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;GetAppMonitorCommand&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-rum&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RUMClient&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fetch the App Monitor config using the RUMClient:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;rum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GetAppMonitorCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RUM_APP&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then upload them to the UI bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// ...&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rum.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;AppMonitor&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;guestRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GUEST_ROLE_ARN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;IDENTITY_POOL_ID&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  TypeScript Snippet
&lt;/h3&gt;

&lt;p&gt;The final piece of the puzzle is injecting this into our front end. The front end will need to include our TypeScript snippet, &lt;a href="https://github.com/martzcodes/blog-cdk-rum/blob/main/ui/rum.ts" rel="noopener noreferrer"&gt;&lt;code&gt;/ui/rum.ts&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This snippet extends the official boilerplate snippet that you would download from the CloudWatch RUM Console by including some code to fetch the required metadata we stored using the Custom Resource above.&lt;/p&gt;

&lt;p&gt;We &lt;em&gt;fetch&lt;/em&gt; the configuration by calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/rum.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then we use the fetched metadata in the configuration of the web client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsRumConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;sessionSampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;guestRoleArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;rum&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;guestRoleArn&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;rum&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;identityPoolId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://dataplane.rum.us-east-1.amazonaws.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;telemetries&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;errors&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;performance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;allowCookies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;enableXRay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;rum&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;APPLICATION_VERSION&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;APPLICATION_REGION&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;awsRum&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsRum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AwsRum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;APPLICATION_VERSION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;APPLICATION_REGION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;config&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;awsRum&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we only need to make sure we import and run the &lt;code&gt;rumRunner&lt;/code&gt; function at the start of our front-end code (&lt;code&gt;ui/main.ts&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;rumRunner&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./rum&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;rumRunner&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying and Integrating with Other Web Applications
&lt;/h2&gt;

&lt;p&gt;By deploying this code we can start to gain insights from our applications, including Core Web Vitals and other performance metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouwvtt94vm5ccz6er3pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouwvtt94vm5ccz6er3pl.png" alt="Image description" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By packaging this as a CDK Construct and TypeScript snippet... we could easily add this to other CDK projects by adding just FOUR lines of code 🤯&lt;/p&gt;

&lt;p&gt;Two lines in your CDK App:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RumRunnerConstruct&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./rum-runner-construct&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RumRunnerConstruct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Rum`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cloudFront&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And two lines in your front-end app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;rumRunner&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./rum&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nf"&gt;rumRunner&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(and I'm being generous with the line counts)&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Measuring the quality of user experience is key to delivering a seamless and enjoyable experience for your website visitors, and Core Web Vitals provide objective metrics for exactly that measurement. To make the most of these metrics, consider them early in the development process. Reusable CDK constructs like the one above make that easy, helping you identify areas for improvement and optimize the user experience of your website.&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>blockchain</category>
      <category>web3</category>
    </item>
    <item>
      <title>Automate Documenting EventBridge Schemas in EventCatalog</title>
      <dc:creator>Matt Martz</dc:creator>
      <pubDate>Thu, 27 Oct 2022 12:46:15 +0000</pubDate>
      <link>https://forem.com/aws-builders/automate-documenting-eventbridge-schemas-in-eventcatalog-5dic</link>
      <guid>https://forem.com/aws-builders/automate-documenting-eventbridge-schemas-in-eventcatalog-5dic</guid>
      <description>&lt;p&gt;In this series we're going to SUPERCHARGE developer experience by implementing &lt;em&gt;Event Driven Documentation&lt;/em&gt;. In &lt;a href="https://dev.to/martzcodes/using-aws-cdk-to-deploy-eventcatalog-4cn1-temp-slug-4011967"&gt;part 1&lt;/a&gt; we used CDK to deploy &lt;a href="https://eventcatalog.dev" rel="noopener noreferrer"&gt;EventCatalog&lt;/a&gt; to a custom domain using CloudFront and S3. In &lt;a href="https://dev.to/martzcodes/automate-documenting-api-gateways-in-eventcatalog-328c-temp-slug-2730949"&gt;part 2&lt;/a&gt; we used &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html" rel="noopener noreferrer"&gt;AWS Service Events&lt;/a&gt; from CloudFormation to detect when an API Gateway has deployed and export the &lt;a href="https://www.openapis.org" rel="noopener noreferrer"&gt;OpenAPI&lt;/a&gt; spec from AWS to bundle it in our EventCatalog. In this post, we'll export the JSONSchema of EventBridge Events using schema discovery and bundle them into the EventCatalog.&lt;/p&gt;

&lt;p&gt;🛑 &lt;em&gt;Not sure where to start with CDK? See my &lt;a href="https://youtu.be/T-H4nJQyMig" rel="noopener noreferrer"&gt;CDK Crash Course on freeCodeCamp&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The architecture we'll be deploying with CDK is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qr3yuzthtu2pc0rjqzf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qr3yuzthtu2pc0rjqzf.png" alt="Dev Portal - Blog Arch.png" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this part we'll focus on the final bit of architecture for subscribing to EventBridge Schema Registry Events and bootstrapping them into the EventCatalog. We'll also talk about strategies for integrating this into CI/CD to make it fully automated.&lt;/p&gt;

&lt;p&gt;💻 The code for this series is published here: &lt;a href="https://github.com/martzcodes/blog-event-driven-documentation" rel="noopener noreferrer"&gt;https://github.com/martzcodes/blog-event-driven-documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤔 If you have any architecture or post questions/feedback... feel free to hit me up on Twitter &lt;a href="https://twitter.com/martzcodes" rel="noopener noreferrer"&gt;@martzcodes&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  EventBridge Schema Discovery
&lt;/h1&gt;

&lt;p&gt;Amazon EventBridge offers a &lt;a href="https://aws.amazon.com/blogs/compute/introducing-amazon-eventbridge-schema-registry-and-discovery-in-preview/" rel="noopener noreferrer"&gt;Schema Registry and Discovery&lt;/a&gt; feature. This feature monitors Event traffic and creates JSON Schemas based on the events it sees. The awesome thing about this is every time it creates a new schema or updates an existing one... it emits an AWS Event that we can trigger off of! We'll use these events to export the discovered event's schema and bundle it into EventCatalog, similar to how we did with API Gateways in part 2.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you have inconsistent Event Schemas (schemas with "optional" fields) a new version will be created every time the optional fields appear/disappear. &lt;strong&gt;A best practice for Event Schemas would be to make sure the event interfaces stay consistent (no optional fields and try not to use objects with changing keys).&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
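A quick way to sanity-check event consistency before Schema Discovery sees your traffic is to compare top-level key sets across payloads. This is a hypothetical helper (not part of the post's code), sketching the check described above:

```typescript
// Verifies that a batch of event payloads all share the same top-level key
// set. If they don't, Schema Discovery will register a new schema version
// whenever the optional fields appear or disappear.
const hasStableShape = (events: Record<string, unknown>[]): boolean => {
  if (events.length < 2) return true;
  // Sort keys so field order doesn't matter, only presence.
  const baseline = Object.keys(events[0]).sort().join(",");
  return events.every((e) => Object.keys(e).sort().join(",") === baseline);
};
```

Running this over a sample of recent events for a given detail type will flag the schemas most likely to churn versions.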

&lt;h2&gt;
  
  
  Enabling Schema Discovery
&lt;/h2&gt;

&lt;p&gt;First, we'll create a new construct for our Account Stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface EventsConstructProps {
  bus: IEventBus;
  specBucket: Bucket;
}

export class EventsConstruct extends Construct {
  constructor(scope: Construct, id: string, props: EventsConstructProps) {
    super(scope, id);
    const { bus, specBucket } = props;

    new CfnDiscoverer(this, `Discoverer`, {
      sourceArn: bus.eventBusArn,
      description: "Schema Discoverer",
      crossAccount: false,
    });
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This construct uses CDK's level 1 construct called &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eventschemas.CfnDiscoverer.html" rel="noopener noreferrer"&gt;&lt;code&gt;CfnDiscoverer&lt;/code&gt;&lt;/a&gt;. We provide it with our default bus and tell it not to track events that came from outside of the account we're currently in (that could get noisy).&lt;/p&gt;

&lt;p&gt;🌈 &lt;em&gt;Level 1 Constructs are 1:1 mappings with the equivalent CloudFormation resource (e.g. &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eventschemas-discoverer.html" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt; vs &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eventschemas.CfnDiscoverer.html" rel="noopener noreferrer"&gt;CDK L1&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exporting Event Schemas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating the Infrastructure
&lt;/h3&gt;

&lt;p&gt;With Schema Discovery enabled, we can create our lambda and invoke that lambda based on the AWS Service Events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const eventsFn = new NodejsFunction(this, `eventsFn`, {
  runtime: Runtime.NODEJS_16_X,
  entry: join(__dirname, `./events-lambda.ts`),
  logRetention: RetentionDays.ONE_DAY,
  initialPolicy: [
    new PolicyStatement({
      effect: Effect.ALLOW,
      actions: ["schemas:*"],
      resources: ["*"],
    }),
  ],
});
specBucket.grantReadWrite(eventsFn);
eventsFn.addEnvironment("SPEC_BUCKET", specBucket.bucketName);
bus.grantPutEventsTo(eventsFn);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We grant the lambda the permissions it needs (read/write on the spec bucket and putEvents on the default bus) and pass the bucket name in as an environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new Rule(this, `eventsRule`, {
  eventBus: props.bus,
  eventPattern: {
    source: ["aws.schemas"],
    detailType: ["Schema Created", "Schema Version Created"],
  },
  targets: [new LambdaFunction(eventsFn)],
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS Schema events offer two detail types: "Schema Created" and "Schema Version Created". You can see the contents of these on the &lt;a href="https://us-east-1.console.aws.amazon.com/events/home?region=us-east-1#/explore" rel="noopener noreferrer"&gt;Explore page in the EventBridge console&lt;/a&gt;. Our rule invokes the lambda on both detail types.&lt;/p&gt;
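To make the filtering concrete, here is a minimal sketch of how that rule's pattern behaves (this mimics, not reimplements, EventBridge's matching for this simple source/detail-type pattern):

```typescript
// Shape of the fields the rule matches on.
interface BusEvent {
  source: string;
  "detail-type": string;
}

// Same pattern as the Rule above: aws.schemas source, two detail types.
const schemaEventPattern = {
  source: ["aws.schemas"],
  detailType: ["Schema Created", "Schema Version Created"],
};

// An event matches when both its source and detail-type appear in the pattern.
const matchesSchemaRule = (event: BusEvent): boolean =>
  schemaEventPattern.source.includes(event.source) &&
  schemaEventPattern.detailType.includes(event["detail-type"]);
```

Anything else on the bus (S3 events, your own application events) falls through without invoking the lambda.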

&lt;h3&gt;
  
  
  Processing the Events
&lt;/h3&gt;

&lt;p&gt;Unlike Part 2... processing these events is a lot easier because we have everything we need in the Event itself and only need to make one AWS SDK call. The event includes the Schema Name and Version, and we use those to export the JSONSchema via the AWS SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const RegistryName = event.detail!.RegistryName;
const SchemaName = event.detail!.SchemaName;
const SchemaVersion = event.detail!.Version;
const SchemaDate = event.detail!.CreationDate;

const exportSchemaCommand = new ExportSchemaCommand({
  RegistryName,
  SchemaName,
  Type: "JSONSchemaDraft4",
});
const schemaResponse = await schemasClient.send(exportSchemaCommand);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there we put it in our spec bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const schema = JSON.parse(schemaResponse.Content);

const fileLoc = {
  Bucket: process.env.SPEC_BUCKET,
  Key: `events/${SchemaName}/spec.json`,
};

const putObjectCommand = new PutObjectCommand({
  ...fileLoc,
  Body: JSON.stringify(schema),
});
await s3.send(putObjectCommand);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And emit the event with our presigned URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getObjectCommand = new GetObjectCommand({
  ...fileLoc,
});
const url = await getSignedUrl(s3, getObjectCommand, { expiresIn: 60 * 60 });

const eventDetail: EventSchemaEvent = {
  SchemaName,
  SchemaVersion,
  RegistryName,
  SchemaDate,
  url,
};

const putEvent = new PutEventsCommand({
  Entries: [
    {
      Source,
      DetailType: BlogDetailTypes.EVENT,
      Detail: JSON.stringify(eventDetail),
    },
  ],
});
await eb.send(putEvent);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Updating the Watcher to Copy the Schemas
&lt;/h3&gt;

&lt;p&gt;In Part 2 we added a utility method to our spec construct that creates a lambda with a rule. We need to use that here to add a lambda for these Event schemas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;this.addRule({
  detailType: BlogDetailTypes.EVENT,
  lambdaName: `eventWatcher`,
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lambda simply copies the spec files using a certain S3 Key naming convention:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const handler = async (
  event: EventBridgeEvent&amp;lt;string, EventSchemaEvent&amp;gt;
) =&amp;gt; {
  const res = await fetch(event.detail.url);
  const spec = (await res.json()) as Record&amp;lt;string, any&amp;gt;;

  const fileLoc = {
    Bucket: process.env.SPEC_BUCKET,
    Key: `events/${event.account}/${event.detail.SchemaName}/${event.detail.SchemaVersion}.json`,
  };

  const putObjectCommand = new PutObjectCommand({
    ...fileLoc,
    Body: JSON.stringify(spec, null, 2),
  });
  await s3.send(putObjectCommand);
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
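The key naming convention the watcher relies on can be captured in a small helper (a hypothetical extraction, not in the repo as written):

```typescript
// Builds the watcher's S3 key for a versioned event schema:
// events/<account>/<SchemaName>/<SchemaVersion>.json
const versionedSpecKey = (
  account: string,
  schemaName: string,
  schemaVersion: string
): string => `events/${account}/${schemaName}/${schemaVersion}.json`;
```

Keeping the convention in one place matters because the prepare scripts later split these keys back apart to group specs by account and schema.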



&lt;h2&gt;
  
  
  Bootstrapping the Markdown files for EventCatalog
&lt;/h2&gt;

&lt;p&gt;Now that we have our event JSONSchemas stored in our Watcher's Spec Bucket, we can update our prepare scripts to pull the files and bootstrap them (similar to how we did the API Gateway files in Part 2). One notable difference is that EventCatalog's Event interface offers &lt;a href="https://www.eventcatalog.dev/docs/events/consumers-and-producers" rel="noopener noreferrer"&gt;"Consumers and Producers"&lt;/a&gt; and &lt;a href="https://www.eventcatalog.dev/docs/events/versioning" rel="noopener noreferrer"&gt;Event Versioning&lt;/a&gt;. We're going to create a pseudo-service that represents our Account's EventBus and specify that as the Producer for these events. This is kind of a hack, but it's a useful one. We're also going to create the files needed to version our events.&lt;/p&gt;
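The producer relationship described above boils down to a few lines of front matter per event. Here is a hedged sketch of a builder for it (the function and the sample names are illustrative, not from the repo):

```typescript
// Builds EventCatalog event front matter naming the pseudo bus service as
// the producer of the event.
const eventFrontMatter = (
  name: string,
  producer: string,
  owner: string
): string =>
  [
    "---",
    `name: ${name}`,
    "version: latest",
    "producers:",
    `  - ${producer}`,
    "owners:",
    `  - ${owner}`,
    "---",
  ].join("\n");
```

EventCatalog then renders the pseudo bus service on the event's page as if it were any other producing service, which is exactly the point of the hack.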

&lt;p&gt;The folder structure for a domain will end up looking like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;acct-&amp;lt;account&amp;gt;/
 events/
  blog.dev.catalog@Spec.event/
   index.md
   schema.json
  blog.dev.catalog@Spec.openapi/
   versioned/
    1/
     changelog.md
     index.md
     schema.json
    2/
      changelog.md
      index.md
      schema.json
   index.md
   schema.json
 services/
 &amp;lt;account&amp;gt;-bus/
   index.md
   openapi.json
  iam-backed-api/
    index.md
    openapi.json
 index.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fetch the Events
&lt;/h3&gt;

&lt;p&gt;To fetch the events we use aws-sdk's &lt;code&gt;ListObjectsCommand&lt;/code&gt; to get the files prefixed with &lt;code&gt;events/&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const listBucketObjectsCommand = new ListObjectsCommand({
  Bucket,
  Prefix: "events/",
});
const bucketObjects = await s3Client.send(listBucketObjectsCommand);
const specs = bucketObjects.Contents!.reduce((p, c) =&amp;gt; {
  const key: string = c.Key!;
  const splitKey = key.split("/");
  const account = splitKey[1];
  const schemaName = splitKey[2];
  const schemaVersion = splitKey[3].split(".")[0];
  if (!Object.keys(p).includes(`${account}-${schemaName}`)) {
    return {
      ...p,
      [`${account}-${schemaName}`]: {
        key,
        account,
        schemaName,
        schemaVersion,
        versions: [{ schemaVersion, key }],
      },
    };
  }
  p[`${account}-${schemaName}`].versions.push({ schemaVersion, key });
  return p;
}, {} as Record&amp;lt;string, { key: string; account: string; schemaName: string; schemaVersion: string; versions: { schemaVersion: string; key: string }[] }&amp;gt;);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We store these S3 keys in an object so that we can determine the latest version of each spec, and we process them by schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const specKeys = Object.keys(specs);
for (let j = 0; j &amp;lt; specKeys.length; j++) {
  const specMeta = specs[specKeys[j]];
  const versionInfo = {
    schemaVersion: 0,
    key: "",
    index: -1,
  };
  specMeta.versions.forEach((version, versionInd) =&amp;gt; {
    if (Number(version.schemaVersion) &amp;gt; versionInfo.schemaVersion) {
      versionInfo.schemaVersion = Number(version.schemaVersion);
      versionInfo.key = version.key;
      versionInfo.index = versionInd;
    }
  });
  if (versionInfo.index &amp;gt; -1) {
    specMeta.key = versionInfo.key;
    specMeta.schemaVersion = `latest`;
    specMeta.versions.splice(versionInfo.index, 1);
  }

  const getSpecCommand = new GetObjectCommand({
    Bucket,
    Key: specMeta.key,
  });

  const specObj = await s3Client.send(getSpecCommand);
  const spec = await streamToString(specObj.Body as Readable);
  // ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
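The latest-version selection in that loop is self-contained enough to isolate as a pure function. This is a sketch of the same logic (names hypothetical), which makes the splice-out-the-latest behavior easier to test:

```typescript
// A discovered schema version and where its spec lives in S3.
interface SpecVersionRef {
  schemaVersion: string;
  key: string;
}

// Returns the highest numeric version plus the remaining (older) versions,
// mirroring how the prepare script promotes one spec to "latest".
const splitLatest = (versions: SpecVersionRef[]) => {
  let latestIndex = -1;
  versions.forEach((v, i) => {
    if (
      latestIndex === -1 ||
      Number(v.schemaVersion) > Number(versions[latestIndex].schemaVersion)
    ) {
      latestIndex = i;
    }
  });
  return {
    latest: latestIndex === -1 ? undefined : versions[latestIndex],
    older: versions.filter((_, i) => i !== latestIndex),
  };
};
```

The `latest` entry becomes the event's top-level `schema.json`, and everything in `older` lands under `versioned/`.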



&lt;h3&gt;
  
  
  Ensure the Domain folder exists
&lt;/h3&gt;

&lt;p&gt;In Part 2 we created a &lt;code&gt;makeDomain&lt;/code&gt; shared method. To ensure the domain folder exists we just need to call it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const domainPath = makeDomain(specMeta.account);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Pseudo Bus Service
&lt;/h3&gt;

&lt;p&gt;Next, we create the pseudo service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const pseudoServiceName = `${specMeta.account}-bus`;
const pseudoServicePath = join(
  domainPath,
  `./services/${pseudoServiceName}`
);
mkdirSync(pseudoServicePath, { recursive: true });
const apiMd = [
  `---`,
  `name: ${pseudoServiceName}`,
  `summary: |`,
  ` This is a pseudo-service that represents the Default Event Bus in the AWS Account. It isn't a real service.`,
  `owners:`,
  ` - martzcodes`,
  `badges:`,
  ` - content: EventBus`,
  `   backgroundColor: red`,
  `   textColor: red`,
  `---`,
];
writeFileSync(join(pseudoServicePath, `./index.md`), apiMd.join("\n"));

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the Events
&lt;/h3&gt;

&lt;p&gt;We create the latest (parent) event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const eventPath = join(domainPath, `./events/${specMeta.schemaName}`);
mkdirSync(eventPath, { recursive: true });
writeFileSync(join(eventPath, `./schema.json`), spec);
if (!existsSync(join(eventPath, `./index.md`))) {
  const apiMd = [
    `---`,
    `name: ${specMeta.schemaName}`,
    `version: latest`,
    `summary: |`,
    ` This is the automatically stubbed documentation for the ${specMeta.schemaName} Event in the ${specMeta.account} AWS Account.`,
    `producers:`,
    ` - ${pseudoServiceName}`,
    `owners:`,
    ` - martzcodes`,
    `---`,
    ``,
    `&amp;lt;Schema /&amp;gt;`,
  ];
  writeFileSync(join(eventPath, `./index.md`), apiMd.join("\n"));
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add Versioning
&lt;/h3&gt;

&lt;p&gt;And finally the version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (let k = 0; k &amp;lt; specMeta.versions.length; k++) {
  const specMetaVersion = specMeta.versions[k];

  const getSpecVersionCommand = new GetObjectCommand({
    Bucket,
    Key: specMetaVersion.key,
  });

  const specVersionObj = await s3Client.send(getSpecVersionCommand);
  const specVersion = await streamToString(
    specVersionObj.Body as Readable
  );

  const versionPath = join(
    eventPath,
    `./versioned/${specMetaVersion.schemaVersion}`
  );
  mkdirSync(versionPath, { recursive: true });
  writeFileSync(join(versionPath, `./schema.json`), specVersion);
  const apiMd = [
    `---`,
    `name: ${specMeta.schemaName}`,
    `version: ${specMetaVersion.schemaVersion}`,
    `summary: |`,
    ` This is the automatically stubbed documentation for the ${specMeta.schemaName} Event in the ${specMeta.account} AWS Account. This is an old version of the spec.`,
    `producers:`,
    ` - ${pseudoServiceName}`,
    `owners:`,
    ` - martzcodes`,
    `---`,
    ``,
    `&amp;lt;Schema /&amp;gt;`,
  ];
  writeFileSync(join(versionPath, `./index.md`), apiMd.join("\n"));

  const changelog = [`### Changes`];
  writeFileSync(
    join(versionPath, `./changelog.md`),
    changelog.join("\n")
  );
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  The Final Result
&lt;/h1&gt;

&lt;p&gt;You can see this in action at &lt;a href="https://docs.martz.dev" rel="noopener noreferrer"&gt;docs.martz.dev&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdoli8x3r5w8v98o7151.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdoli8x3r5w8v98o7151.png" alt="Screenshot 2022-10-27 at 8.43.51 AM.png" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3bjyz5yphefio1mmnvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3bjyz5yphefio1mmnvy.png" alt="Screenshot 2022-10-27 at 8.43.33 AM.png" width="800" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  CI/CD Strategies
&lt;/h1&gt;

&lt;p&gt;There are a few factors when determining your own CI/CD strategy for this. Right now everything is automatically updated when we do CDK deploys... but the CDK deploys themselves aren't automated.&lt;/p&gt;

&lt;p&gt;The big factor is how many things you're tracking. At work we monitor 30+ AWS accounts used by &amp;gt; 100 developers. At that volume, having every watched event kick off a deployment pipeline risks being too much. Instead we'll likely use a scheduled CI/CD build to periodically update the documentation.&lt;/p&gt;

&lt;p&gt;For CI/CD you could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a CodeBuild/CodePipeline project to automatically deploy the EventCatalog based on "watched" events.&lt;/li&gt;
&lt;li&gt;Connect your normal CI/CD up to a schedule (maybe you have a lot of events from many accounts and only want to update documentation every hour or so).&lt;/li&gt;
&lt;li&gt;Continue manually deploying it &lt;em&gt;(which is what I'll do for my personal account since I don't deploy there often)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What's Next?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html" rel="noopener noreferrer"&gt;AWS Service Events&lt;/a&gt; offer a lot of useful insights in to your applications deployed to AWS.&lt;/p&gt;

&lt;p&gt;💡Want to see what other Service Events are available? &lt;a href="https://us-east-1.console.aws.amazon.com/events/home?region=us-east-1#/explore" rel="noopener noreferrer"&gt;Check out the EventBridge "Explore" page in the console&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4txc4xgjceooa0k6yq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4txc4xgjceooa0k6yq8.png" alt="Screenshot 2022-10-26 at 9.32.41 AM.png" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From here you could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extend the schemas by &lt;a href="https://matt.martz.codes/improving-eventbridge-schema-discovery" rel="noopener noreferrer"&gt;Improving EventBridge Schema Discovery&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Store additional information from GitHub webhooks in DynamoDB&lt;/li&gt;
&lt;li&gt;Track EventBridge Rule changes via CloudFormation deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;What would you do next?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;🙌 If anything wasn't clear or if you want to be notified on future posts... feel free to hit me up on Twitter &lt;a href="https://twitter.com/martzcodes" rel="noopener noreferrer"&gt;@martzcodes&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
