<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matthew Wilson</title>
    <description>The latest articles on Forem by Matthew Wilson (@matthewwilson).</description>
    <link>https://forem.com/matthewwilson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F831855%2F8594edfe-8146-438a-8bc6-dc6eb7128fa8.png</url>
      <title>Forem: Matthew Wilson</title>
      <link>https://forem.com/matthewwilson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/matthewwilson"/>
    <language>en</language>
    <item>
      <title>Eliminating additional bandwidth charges for multi-zone sites on Vercel</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Fri, 22 Dec 2023 16:03:10 +0000</pubDate>
      <link>https://forem.com/matthewwilson/eliminating-additional-bandwidth-charges-for-multi-zone-sites-on-vercel-1k5a</link>
      <guid>https://forem.com/matthewwilson/eliminating-additional-bandwidth-charges-for-multi-zone-sites-on-vercel-1k5a</guid>
      <description>&lt;p&gt;When deploying applications to any kind of cloud platform, be that AWS, Azure, GCP or even a higher level abstraction like Vercel, monitoring cost is an important aspect of the development lifecycle. We should be continuously trying to reduce our monthly spend whenever possible. In this post, we show how we eliminated additional bandwidth charges on a recent multi-zone Vercel client project.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Multi Zone Project
&lt;/h2&gt;

&lt;p&gt;The client in question has a large Vercel-based application that currently has 10 projects deployed with a routing structure like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yEy-PPAq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/d5a4e6972f28b3023f9ed5d95354622e/e8950/vercel-charges-multi-zone.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yEy-PPAq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/d5a4e6972f28b3023f9ed5d95354622e/e8950/vercel-charges-multi-zone.png" alt="" width="800" height="326"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;p&gt;Project structure showing parent project rewriting to several child projects.&lt;/p&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next.js/Vercel refer to this kind of setup as a &lt;a href="https://vercel.com/guides/how-can-i-serve-multiple-projects-under-a-single-domain"&gt;“Multi-Zone” Project&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;With multi zones support, you can merge both these apps into a single one allowing your customers to browse it using a single URL, but you can develop and deploy both apps independently.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This enabled us to deploy multiple versions of the same project with different configurations, while presenting the feel of a single site to customers.&lt;/p&gt;
&lt;h2&gt;
  
  
  Multi Zones == Double Billing
&lt;/h2&gt;

&lt;p&gt;There is one issue, however, that both the Vercel and Next.js documentation neglect to mention: you will be "double billed" for bandwidth usage in a multi-zone project. This is because the rewrites work by fetching and serving the content from the child site. Therefore, both the parent site and the rewrite target consume bandwidth by serving the same content.&lt;/p&gt;

&lt;p&gt;This is fine if you can safely live within the &lt;a href="https://vercel.com/docs/limits/usage#bandwidth"&gt;1TB limit on bandwidth&lt;/a&gt; but go above this limit and things start to get expensive at $40 per 100GB (at the time of writing this article).&lt;/p&gt;

&lt;p&gt;We decided to see if we could provide the same functionality provided by the rewrites on Vercel but not pay a premium for the privilege.&lt;/p&gt;
&lt;h2&gt;
  
  
  Rewriting the rewrites
&lt;/h2&gt;

&lt;p&gt;Our site used &lt;a href="https://www.cloudflare.com"&gt;Cloudflare&lt;/a&gt; to handle all our DNS needs. Cloudflare has a number of rewrite options, but as we needed to make a decision based on a) the URL path and b) rewriting to a specific domain, the only option (as a non-enterprise customer) was to use Cloudflare Workers (via &lt;a href="https://developers.cloudflare.com/workers/configuration/routing/routes/"&gt;worker routes&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The functionality is simple: we configure a worker route for the path that should rewrite to a child project, fetch the content from the child site in our Cloudflare worker, and then return it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ppWtQAwO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/bcf299cefd352de7d4be714e8140963a/0a47e/vercel-charges-cloudflare-worker-route.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ppWtQAwO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/bcf299cefd352de7d4be714e8140963a/0a47e/vercel-charges-cloudflare-worker-route.png" alt="" width="600" height="312"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;p&gt;Configuring a Cloudflare worker route, with a wildcard based path and which worker to execute&lt;/p&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Worker routes have wildcard support as &lt;a href="https://developers.cloudflare.com/workers/configuration/routing/routes/#matching-behavior"&gt;documented here&lt;/a&gt;, so it’s easy to add a catch-all route for a specific path.&lt;/p&gt;
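&lt;p&gt;If you manage your workers with Wrangler, the same route can also be declared in the worker's configuration file. The following is a minimal sketch; the worker name, domain and path are illustrative placeholders, not the client's real values:&lt;/p&gt;

```toml
# Hypothetical wrangler.toml for the rewrite worker.
# "parent-project.co.uk" and "/child-path/*" are placeholders.
name = "url-rewrite-worker"
main = "src/index.ts"
compatibility_date = "2023-12-01"

# Run this worker for every request under the child path on the parent zone.
routes = [
  { pattern = "www.parent-project.co.uk/child-path/*", zone_name = "parent-project.co.uk" }
]
```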

&lt;p&gt;The code in the URL rewrite worker is simple too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originalUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newHost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;child-project.co.uk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;originalUrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;newHost&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;originalRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;redirect&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet simply fetches the content from &lt;code&gt;child-project.co.uk&lt;/code&gt; but maintains the path, headers, body etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's talk cost
&lt;/h2&gt;

&lt;p&gt;With Cloudflare Workers you don’t pay &lt;strong&gt;any&lt;/strong&gt; egress fees, so this simple hack immediately saves you the $40 per 100GB bandwidth cost. For our client, that represented a total saving of &lt;strong&gt;$400 per month&lt;/strong&gt; based on an average monthly usage of 2TB. (Remember, the first 1TB is free, and Vercel will round 1.9TB of bandwidth usage up to 2TB!)&lt;/p&gt;
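&lt;p&gt;As a sanity check on that figure, here is the billing arithmetic spelled out using the article's own numbers (Vercel's pricing and rounding may of course change):&lt;/p&gt;

```typescript
// Billing maths from the article: 1.9TB of (doubled) bandwidth is rounded
// up to 2TB by Vercel, the first 1TB is included in the plan, and overage
// costs $40 per 100GB at the time of writing.
const includedTb = 1;
const billedUsageTb = 2; // 1.9TB rounded up by Vercel
const overageGb = (billedUsageTb - includedTb) * 1000;
const costPer100Gb = 40;
const monthlySaving = (overageGb / 100) * costPer100Gb;
console.log(monthlySaving); // 400
```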

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v9GLjK1O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ae19e7c8084e30a032e20e4e2ee46296/84a78/vercel-charges-bandwidth-usage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v9GLjK1O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ae19e7c8084e30a032e20e4e2ee46296/84a78/vercel-charges-bandwidth-usage.png" alt="" width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;p&gt;Total bandwidth usage for the project&lt;/p&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Looking across 7 days of requests on Cloudflare for our busiest child project, you can see that our little workers are handling plenty of traffic. This particular site serves double the traffic of all our other child projects combined.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NCiF8dfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/1d2e2193d3d0cb6f3e4311ad768bd22f/e8950/vercel-charges-cloudflare-worker-summary.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NCiF8dfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/1d2e2193d3d0cb6f3e4311ad768bd22f/e8950/vercel-charges-cloudflare-worker-summary.png" alt="" width="800" height="491"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;p&gt;Graph showing how many times the worker has been invoked over a 7 day period.&lt;/p&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These requests are incredibly cheap as well. When we plugged the numbers into the &lt;a href="https://pricing.ceru.dev/?rpm=4500000&amp;amp;spr=0.5&amp;amp;url="&gt;Unofficial Cloudflare Workers' pricing calculator&lt;/a&gt;, the weekly cost for this site was $5.53. The cost for all sites works out at about $50 per month.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By using Cloudflare Workers and configuring worker routes, we were able to replicate the rewrite functionality provided by Vercel without incurring any egress fees. This resulted in significant cost savings for our customer, with an estimated &lt;strong&gt;reduction of 87.5% in monthly costs&lt;/strong&gt;. Since Cloudflare Workers charge no egress fees, our customer saved $40 per 100GB of bandwidth. This cost-effective approach allowed us to handle high traffic volumes while keeping costs low.&lt;/p&gt;
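&lt;p&gt;For the curious, the 87.5% figure falls out of the two numbers above: roughly $50 per month for Workers in place of the $400 per month previously paid in Vercel bandwidth overage:&lt;/p&gt;

```typescript
// Where the 87.5% reduction comes from (the article's own figures).
const vercelOverageCost = 400; // $/month previously paid to Vercel for overage
const workersCost = 50;        // $/month now paid for Cloudflare Workers
const reductionPct = ((vercelOverageCost - workersCost) / vercelOverageCost) * 100;
console.log(reductionPct); // 87.5
```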

</description>
      <category>vercel</category>
      <category>cloudflare</category>
    </item>
    <item>
      <title>A beginner's guide to AWS Best Practices</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Wed, 13 Dec 2023 16:00:57 +0000</pubDate>
      <link>https://forem.com/aws-builders/a-beginners-guide-to-aws-best-practices-6j</link>
      <guid>https://forem.com/aws-builders/a-beginners-guide-to-aws-best-practices-6j</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/HHJZmKFRwtc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I was recently asked to run a session on “AWS Best Practices” with some engineers who were starting their careers at Instil.&lt;/p&gt;

&lt;p&gt;The topic itself is huge and would be impossible to distil down to just a few points, so this post covers a few of the obvious choices and how you can find out more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;This should be the obvious first choice. When you are developing cloud applications, security should be at the very forefront of your mind. Security is a never-ending game of cat and mouse; it cannot be a checkbox exercise that is marked as “done”. As your application evolves, so must your security posture. Here are a few key points to consider from the outset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understand the Shared Responsibility Model
&lt;/h3&gt;

&lt;p&gt;Thankfully with AWS, part of the security burden has already been taken care of for you. This is explained with what AWS call the “&lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model/" rel="noopener noreferrer"&gt;Shared Responsibility Model&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fe3255d8c4fdf0c7dd96b03c796dfc525%2F882b9%2Faws-best-practices-shared-responsibility.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fe3255d8c4fdf0c7dd96b03c796dfc525%2F882b9%2Faws-best-practices-shared-responsibility.webp"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is usually summarised down to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS responsibility is the “Security of the Cloud”, the customer responsibility is “Security in the Cloud”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is an important concept to understand. You can minimise your responsibility by choosing Serverless or managed services, but there will always be some level of responsibility as a customer of AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secure your Root Account
&lt;/h3&gt;

&lt;p&gt;This might not be something you ever have to do; normally, account creation will be taken care of by your organisation (and would hopefully be automated). But I thought it was worthwhile sharing, as you might be setting up a personal AWS account for training, experimentation or even certifications.&lt;/p&gt;

&lt;p&gt;When setting up a brand new AWS account, you will start by creating a “root user”. It can be tempting to then use this user for deploying and managing your application; however, the root user has “the keys to the kingdom”. If it is compromised in any way, you could lose everything in that account, rack up an enormous AWS bill and potentially expose further information about your organisation.&lt;/p&gt;

&lt;p&gt;While securing your root account isn't a magic bullet, you should consider it part of the “&lt;a href="https://en.wikipedia.org/wiki/Swiss_cheese_model" rel="noopener noreferrer"&gt;Swiss cheese&lt;/a&gt;” approach to security:  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllbqx4ikkb91bie0wlrs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllbqx4ikkb91bie0wlrs.png" alt="Image of the shared responsibility model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Swiss cheese model of accident causation illustrates that, although many layers of defense lie between hazards and accidents, there are flaws in each layer that, if aligned, can allow the accident to occur. In this diagram, three hazard vectors are stopped by the defences, but one passes through where the "holes" are lined up.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS Community Builder &lt;a href="https://twitter.com/andmoredev" rel="noopener noreferrer"&gt;Andres Moreno&lt;/a&gt; has written a &lt;a href="https://www.andmore.dev/blog/stop-using-aws-root-user/" rel="noopener noreferrer"&gt;great post&lt;/a&gt; that covers the recommended settings for your root account, which they have summarised as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Do not create access keys for the root user. Create an IAM user for yourself with administrative permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Never share the root user credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a strong password. (Use a password manager if possible)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable multi-factor authentication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I would also add that these steps should be followed for any subsequent account creations and &lt;a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/how-to-configure-mfa-device-enforcement.html" rel="noopener noreferrer"&gt;enforced&lt;/a&gt; as well.&lt;/p&gt;

&lt;p&gt;In a multi account setup, AWS &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html#orgs_manage_accounts_access-as-root" rel="noopener noreferrer"&gt;automatically creates&lt;/a&gt; a strong password and does not give you details of that password by default specifically to discourage use of the root user. You can then use &lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/best-practices_member-acct.html#bp_member-acct_use-scp" rel="noopener noreferrer"&gt;SCPs to deny all actions&lt;/a&gt; for the root user in a member account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Principle of least privilege (POLP)
&lt;/h3&gt;

&lt;p&gt;While securing your account is an important first step in setting up your infrastructure, the Principle of least privilege is an important aspect to consider during the entire software development lifecycle.&lt;/p&gt;

&lt;p&gt;The idea here is that a &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/intro-structure.html#intro-structure-principal" rel="noopener noreferrer"&gt;&lt;em&gt;principal&lt;/em&gt;&lt;/a&gt; should only be able to perform the specific actions and access the specific resources that are required for it to function correctly.&lt;/p&gt;

&lt;p&gt;More info can be found in the Well Architected framework - &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html#" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html#&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Practically speaking, Infrastructure as Code (IaC) frameworks like CDK (&lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;Cloud Development Kit&lt;/a&gt;) make this very easy to implement. In fact, it’s considered a &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/best-practices.html#best-practices-apps" rel="noopener noreferrer"&gt;best practice of CDK itself&lt;/a&gt; to use its built-in &lt;code&gt;grant&lt;/code&gt; functions. For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This single line adds a policy to the Lambda function's role (which is also created for you). That role and its policies are more than a dozen lines of CloudFormation that you don't have to write. The AWS CDK grants only the minimal permissions required for the function to read from the bucket.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Handling Secrets
&lt;/h3&gt;

&lt;p&gt;It is highly likely that you will have to handle some kind of secret when developing your application. To quote &lt;a href="https://www.lastweekinaws.com/blog/handling-secrets-with-aws/" rel="noopener noreferrer"&gt;Corey Quinn&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Let’s further assume that you’re not a dangerous lunatic who &lt;a href="https://security.web.cern.ch/recommendations/en/password_alternatives.shtml" rel="noopener noreferrer"&gt;hardcodes those secrets into your application code&lt;/a&gt;.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS gives you two sensible options: &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html" rel="noopener noreferrer"&gt;Systems Manager Parameter Store&lt;/a&gt; or &lt;a href="https://aws.amazon.com/secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt;. There are pros and cons to both options, but understand that Parameter Store is just as secure as Secrets Manager, provided you use the &lt;code&gt;SecureString&lt;/code&gt; parameter type. &lt;a href="https://www.lastweekinaws.com/blog/handling-secrets-with-aws/" rel="noopener noreferrer"&gt;More info here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In addition to choosing your secret service (pun intended), here are a few more pointers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use your IAC framework to create and maintain secrets for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure that secrets are not shared across environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Never store secrets in plaintext on developer machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Never include secrets in log messages. (You can configure Amazon Macie to &lt;a href="https://docs.aws.amazon.com/macie/latest/user/managed-data-identifiers.html" rel="noopener noreferrer"&gt;automatically detect credentials&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rotate your secrets where possible, and again let the &lt;a href="https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_secretsmanager.SecretRotation.html" rel="noopener noreferrer"&gt;services handle this for you&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Encryption
&lt;/h3&gt;

&lt;p&gt;When handling sensitive data it’s important to protect it. One technique you can use is encryption, both while the data is in &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_protect_data_transit_encrypt.html" rel="noopener noreferrer"&gt;transit&lt;/a&gt; and at &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_protect_data_rest_encrypt.html" rel="noopener noreferrer"&gt;rest&lt;/a&gt;. It may surprise you that for many services like S3 and DynamoDB, encryption at rest is &lt;em&gt;disabled&lt;/em&gt; by default.&lt;/p&gt;

&lt;p&gt;Thankfully, using tools like &lt;a href="https://github.com/cdklabs/cdk-nag" rel="noopener noreferrer"&gt;cdk-nag&lt;/a&gt; you can enforce some best practices around this and enable encryption from the beginning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost
&lt;/h2&gt;

&lt;p&gt;It’s becoming increasingly apparent that cloud developers need to understand the cost of the changes they are making. AWS provide a &lt;a href="https://docs.aws.amazon.com/cost-management/latest/userguide/what-is-costmanagement.html" rel="noopener noreferrer"&gt;number of tools&lt;/a&gt; to help you analyse the cost of your application when it is up and running. However, cost should also be factored into the design stage of the development process.&lt;/p&gt;

&lt;p&gt;Cost has become such an important factor in architecting cloud applications that it featured heavily in Amazon CTO Werner Vogels’ &lt;a href="https://www.youtube.com/watch?v=UTRBVPvzt9w" rel="noopener noreferrer"&gt;keynote at re:Invent 2023&lt;/a&gt;. In it he shared &lt;a href="https://thefrugalarchitect.com" rel="noopener noreferrer"&gt;The Frugal Architect&lt;/a&gt; which contains some "Simple laws for building cost-aware, sustainable, and modern architectures."&lt;/p&gt;

&lt;p&gt;When we talk about cost, yes we should consider the cost of the service itself, but we should also consider the &lt;a href="https://www.readysetcloud.io/blog/allen.helton/understanding-tco-of-serverless-and-container-applications/" rel="noopener noreferrer"&gt;Total Cost of Ownership&lt;/a&gt; as well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Familiarise yourself and learn how to read the pricing tables of various services on AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create budget alerts to avoid unexpected bills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Include &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/design-principles.html" rel="noopener noreferrer"&gt;cost optimisation&lt;/a&gt; as part of your design stage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get comfortable with using &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/" rel="noopener noreferrer"&gt;Cost Explorer&lt;/a&gt; - when costs spike it's important to understand how to drill down into costs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Serverless First
&lt;/h2&gt;

&lt;p&gt;Serverless has grown to mean many things and can even mean something different depending on &lt;a href="https://aws.amazon.com/serverless/" rel="noopener noreferrer"&gt;whose cloud you are paying for&lt;/a&gt;. But for our team and for the purpose of this article, serverless is a way of building and running applications without having to manage the underlying infrastructure on AWS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Of course&lt;/em&gt; there are still servers running in a data centre somewhere but those servers are not our responsibility. With Serverless we as developers can focus on our core product instead of worrying about managing and operating servers or runtimes.&lt;/p&gt;

&lt;p&gt;By choosing “Serverless” or “Managed Services” as much as possible we allow AWS to do the “&lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/cost-dp.html" rel="noopener noreferrer"&gt;undifferentiated heavy lifting&lt;/a&gt;” for us. These are things that are important for the product to function but do not differentiate it from your competitors.&lt;/p&gt;

&lt;p&gt;The benefits of Serverless are clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No infrastructure provisioning or maintenance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pay for what you use&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Highly available and secure&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What we don't want to be is &lt;em&gt;dogmatic&lt;/em&gt;. It’s very tempting to be “Serverless Only”; however, in certain circumstances this can lead to needless complexity and increased cost (especially at a certain scale).&lt;/p&gt;

&lt;p&gt;Serverless first means treating it as the first step: use it to make a start and iterate quickly, but don't be sad if it’s not a good fit for certain customers or workloads.&lt;/p&gt;

&lt;p&gt;Furthermore, it’s important to treat any decision you make as a “two-way door”. Architecture decisions are never final; if something doesn't work or starts to cost too much, change it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Master the basics
&lt;/h2&gt;

&lt;p&gt;It’s entirely possible that you land on a Serverless project and avoid having to do some of the &lt;em&gt;undifferentiated heavy lifting&lt;/em&gt; we have already alluded to. But this doesn't mean you should be ignorant of it. Networking concepts and core AWS services should be understood, even if they are not part of the “Serverless” offering.&lt;/p&gt;

&lt;p&gt;AWS Certifications are a great way to be exposed to these kinds of topics (check out my colleague &lt;a href="https://instil.co/blog/6-steps-to-becoming-an-aws-solutions-architect" rel="noopener noreferrer"&gt;Tom’s post&lt;/a&gt; for more info), but as a start I cannot recommend the free “&lt;a href="https://learn.cantrill.io/p/tech-fundamentals" rel="noopener noreferrer"&gt;Tech Fundamentals&lt;/a&gt;” course by Adrian Cantrill enough. It is not AWS specific but covers fundamental knowledge that will help you design, develop and debug applications in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate everything
&lt;/h2&gt;

&lt;p&gt;We’ve mentioned Infrastructure as Code a number of times already. Using IaC frameworks ensures that we have consistency and repeatability in resource provisioning and management. While you might not be dealing with physical servers, I find the &lt;a href="https://martinfowler.com/bliki/SnowflakeServer.html" rel="noopener noreferrer"&gt;Snowflake Server&lt;/a&gt; concept a really nice way of reminding myself why we spend time automating deployments.&lt;/p&gt;

&lt;p&gt;Snowflakes are unique:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;good for a ski resort, bad for a data center.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We’ve got quite a bit of experience using CDK across a number of projects, so I would recommend it as a great place to start. There are a number of posts on our blog which help cover some fundamental aspects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CDK Lessons learned - &lt;a href="https://instil.co/blog/cdk-lessons-learned/" rel="noopener noreferrer"&gt;https://instil.co/blog/cdk-lessons-learned/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improving the CDK development cycle - &lt;a href="https://instil.co/blog/improving-cdk-development-cycle/" rel="noopener noreferrer"&gt;https://instil.co/blog/improving-cdk-development-cycle/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An added benefit of CDK is that there are plenty of ready made &lt;a href="https://constructs.dev" rel="noopener noreferrer"&gt;constructs&lt;/a&gt; and &lt;a href="https://serverlessland.com/patterns?framework=CDK" rel="noopener noreferrer"&gt;patterns&lt;/a&gt; for you to re-use on your projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Well Architected Framework
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-lens-whitepapers.sort-order=desc&amp;amp;wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&amp;amp;wa-guidance-whitepapers.sort-order=desc" rel="noopener noreferrer"&gt;Well Architected Framework&lt;/a&gt; could be considered “the mother of all AWS best practice guides”. It breaks down what a well architected application looks like using 6 pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Operational Excellence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reliability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance Efficiency&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost Optimisation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sustainability&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While I hope you have found this post useful, I would recommend using it as a starting point: branch into the Well Architected Pillars for further reading, then apply what you have learnt to your own projects.&lt;/p&gt;

&lt;p&gt;There is also a &lt;a href="https://aws.amazon.com/well-architected-tool/" rel="noopener noreferrer"&gt;Well Architected Tool&lt;/a&gt; that can help you assess your application against these pillars. It’s important to note that this is not a “checklist” and you should not aim for a “perfect score”. Instead, it should be used as a guide to help you identify areas of improvement.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Testing Serverless Applications on AWS</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Thu, 02 Nov 2023 19:28:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/testing-serverless-applications-on-aws-h06</link>
      <guid>https://forem.com/aws-builders/testing-serverless-applications-on-aws-h06</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The topic of Serverless testing is a hot one at the moment - there are many different approaches and opinions on how best to do it. In this post, I'm going to share some advice on how &lt;em&gt;we&lt;/em&gt; tackled this problem, what the benefits are of our approach and how things could be improved.&lt;/p&gt;

&lt;p&gt;The project in question is &lt;a href="https://dev.to/case-studies/stroll-insurance/"&gt;Stroll Insurance&lt;/a&gt;, a fully Serverless application running on AWS. In &lt;a href="https://dev.to/blog/zero-to-serverless-car-insurance-part-1/"&gt;previous posts&lt;/a&gt;, we have covered some of the general lessons learnt from this project but in this post we are going to focus on testing.&lt;/p&gt;

&lt;p&gt;For context; the web application is built with &lt;a href="https://dev.to/courses/introduction-to-react-course/"&gt;React&lt;/a&gt; and &lt;a href="https://dev.to/courses/introduction-to-typescript-course/"&gt;TypeScript&lt;/a&gt; which makes calls to an &lt;a href="https://aws.amazon.com/appsync/" rel="noopener noreferrer"&gt;AppSync&lt;/a&gt; API that makes use of the &lt;a href="https://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html" rel="noopener noreferrer"&gt;Lambda and DynamoDB datasources&lt;/a&gt;. We use &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;Step Functions&lt;/a&gt; to orchestrate the flow of events for complex processing like purchasing and renewing policies, and we use &lt;a href="https://aws.amazon.com/s3" rel="noopener noreferrer"&gt;S3&lt;/a&gt; and &lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;SQS&lt;/a&gt; to process document workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Testing 'Triangle'
&lt;/h2&gt;

&lt;p&gt;When the project started, it relied heavily on unit testing. This isn't necessarily a bad thing but we needed a better balance between getting features delivered and maintaining quality. Our &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html" rel="noopener noreferrer"&gt;testing ~pyramid~&lt;/a&gt; triangle looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fa1bde71065edebb133573adba4cafb6e%2Fa5d4d%2Ftesting-serverless-applications-triangle.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fa1bde71065edebb133573adba4cafb6e%2Fa5d4d%2Ftesting-serverless-applications-triangle.webp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Essentially, we had an abundance of unit tests and very few E2E tests. This worked really well for the initial stages of the project but as the product and AWS footprint grew in complexity, we could see that a number of critical parts of the application had no test coverage. Specifically, we had no tests for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Direct service integrations&lt;/strong&gt; used by AppSync resolvers + StepFunctions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event driven flows&lt;/strong&gt; like document processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These underpin critical parts of the application. If they stop working, people will be unable to purchase insurance!&lt;/p&gt;

&lt;p&gt;One problem that we kept experiencing was that unit tests would continue to pass after a change to a Lambda function, but subsequent deployments would fail. Typically, this was because the developer had forgotten to update the permissions in CDK. As a result, we created a rule that everyone had to deploy and test their changes locally first, in their own AWS sandbox, before merging. This worked, but it was an additional step that could easily be forgotten, especially under pressure or time constraints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Balancing the Triangle
&lt;/h2&gt;

&lt;p&gt;So, we agreed that it was time to address the elephant in the room. Where are the integration tests?&lt;/p&gt;

&lt;p&gt;Our motivation was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There is a lack of confidence when we deploy to production, meaning we perform a lot of manual checks before we deploy and sometimes these don't catch everything.&lt;/p&gt;

&lt;p&gt;This increases our lead time and reduces our deployment frequency. We would like to invert this.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The benefits for our client were clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Features go live quicker&lt;/strong&gt;, reducing time to market while still maintaining quality&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gives them a competitive edge&lt;/li&gt;
&lt;li&gt;Shortens the feedback loop, enabling faster iteration on ideas&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Critical aspects of the application are tested&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issues can be diagnosed quicker&lt;/li&gt;
&lt;li&gt;Complex bugs can be reproduced&lt;/li&gt;
&lt;li&gt;Reduces the risk of lost business&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Integration testing can mean a lot of different things to different teams, so our definition was this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An integration test in this project is defined as one that validates integrations with AWS services (e.g. DynamoDB, S3, SQS, etc) but &lt;em&gt;not&lt;/em&gt; third parties. These should be mocked out instead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Breaking down the problem
&lt;/h2&gt;

&lt;p&gt;We decided to start small by first figuring out how to test a few critical paths that had caused us issues in the past. We made a list of how we “trigger” a workload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  S3 → SQS → Lambda&lt;/li&gt;
&lt;li&gt;  DynamoDB Stream → SNS → Lambda&lt;/li&gt;
&lt;li&gt;  SQS → Lambda&lt;/li&gt;
&lt;li&gt;  StepFunction → Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern that emerged was that we have an event that flows through a messaging queue, primarily SQS and SNS. There are a number of comments we can make about this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  There's no real business logic to test until a Lambda function or a State Machine is executed but we still want to test that everything is hooked up correctly.&lt;/li&gt;
&lt;li&gt;  We have the most control over the Lambda functions, so it will be easier to manage the test setup there.&lt;/li&gt;
&lt;li&gt;  We want to be able to put a function or a State Machine into “test mode” so that it will know when to make mocked calls to third parties.&lt;/li&gt;
&lt;li&gt;  We want to keep track of test data that is created so we can clean it up afterwards.&lt;/li&gt;
&lt;/ul&gt;
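&lt;p&gt;To make that last point concrete, the cleanup idea can be sketched as a small tracker that records a cleanup callback whenever test data is created, then runs them once the test finishes. This is an illustrative sketch; the names (&lt;code&gt;createTestDataTracker&lt;/code&gt;, &lt;code&gt;track&lt;/code&gt;, &lt;code&gt;cleanUp&lt;/code&gt;) are ours, not the project's:&lt;/p&gt;

```typescript
// Illustrative sketch only: a tracker that records cleanup callbacks as
// test data is created, then runs them in reverse order once the test ends.
type Cleanup = () => Promise<void>;

export const createTestDataTracker = () => {
  const cleanups: Cleanup[] = [];
  return {
    // call this whenever the test seeds data (an S3 object, a DynamoDB item, ...)
    track: (cleanup: Cleanup): void => {
      cleanups.push(cleanup);
    },
    // run the recorded callbacks newest-first, so dependants are removed first
    cleanUp: async (): Promise<void> => {
      for (const cleanup of [...cleanups].reverse()) {
        await cleanup();
      }
    },
  };
};
```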

&lt;h2&gt;
  
  
  Setting the Test Context
&lt;/h2&gt;

&lt;p&gt;One of the most critical parts of the application is how we process insurance policy documents. This has enough complexity to be able to develop a good pattern for writing our tests so that other engineers could build upon it in the future. This was the first integration test we were going to write.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fd4f8d55540e8b2e166b202134a695887%2Fa5d4d%2Ftesting-serverless-applications-flow.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fd4f8d55540e8b2e166b202134a695887%2Fa5d4d%2Ftesting-serverless-applications-flow.webp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow is like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  File is uploaded to S3 bucket&lt;/li&gt;
&lt;li&gt;  This event is placed onto an SQS queue with a Lambda trigger&lt;/li&gt;
&lt;li&gt;  The Lambda function reads the PDF metadata and determines who the document belongs to.&lt;/li&gt;
&lt;li&gt;  It fetches some data from a third party API relating to the policy and updates a DynamoDB table.&lt;/li&gt;
&lt;li&gt;  File is moved to another bucket for further processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We wanted to assert that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The file no longer exists in the source bucket&lt;/li&gt;
&lt;li&gt; The DynamoDB table was updated with the correct data&lt;/li&gt;
&lt;li&gt; The file exists in the destination bucket&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This would be an incredibly valuable test. Not only does it verify that the workload is behaving correctly, it also verifies that the deployed infrastructure is working properly and that it has the correct permissions.&lt;/p&gt;

&lt;p&gt;For this to work, we needed to make the Lambda Function &lt;em&gt;aware&lt;/em&gt; that it was running as part of a test so that it would use a mocked response instead. The solution that we came up with was to attach some additional &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html" rel="noopener noreferrer"&gt;metadata to the object&lt;/a&gt; when it was uploaded at the start of a test case - an &lt;code&gt;is-test&lt;/code&gt; flag:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F4c4d96be0d6fba9d0e11dd2aa49cf489%2Fa5d4d%2Ftesting-serverless-applications-metadata.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F4c4d96be0d6fba9d0e11dd2aa49cf489%2Fa5d4d%2Ftesting-serverless-applications-metadata.webp"&gt;&lt;/a&gt;&lt;/p&gt;
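&lt;p&gt;As a rough sketch of the test setup, here is a hypothetical helper that builds the S3 &lt;code&gt;PutObject&lt;/code&gt; input with the &lt;code&gt;is-test&lt;/code&gt; flag attached as user-defined object metadata. The helper name and shape are ours, and it deliberately stops short of the SDK call itself:&lt;/p&gt;

```typescript
// Hypothetical helper: build the PutObject input for a test upload.
// S3 user-defined metadata is surfaced to consumers as "x-amz-meta-is-test".
interface TestUploadInput {
  Bucket: string;
  Key: string;
  Body: string;
  Metadata: Record<string, string>;
}

export const buildTestUploadInput = (
  bucket: string,
  key: string,
  body: string
): TestUploadInput => ({
  Bucket: bucket,
  Key: key,
  Body: body,
  Metadata: { "is-test": "true" },
});
```

&lt;p&gt;The returned object can then be passed to the SDK's &lt;code&gt;PutObjectCommand&lt;/code&gt; in the test's setup step.&lt;/p&gt;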

&lt;p&gt;If the S3 object is moved to another bucket as part of its processing then we also copy its metadata. The metadata is never lost even in more complex or much larger end-to-end workflows. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Middy Touch
&lt;/h2&gt;

&lt;p&gt;Adding the &lt;code&gt;is-test&lt;/code&gt; flag to our object metadata gave us our way of passing some kind of test context into our workload. The next step was to make the Lambda Function capable of &lt;em&gt;discovering&lt;/em&gt; the context and then using that to control how it &lt;em&gt;behaves&lt;/em&gt; under test. For this we used &lt;a href="https://middy.js.org" rel="noopener noreferrer"&gt;Middy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're not familiar, Middy is a middleware framework specifically designed for Lambda functions. Essentially it allows you to wrap your handler code up so that you can do some &lt;code&gt;before&lt;/code&gt; and &lt;code&gt;after&lt;/code&gt; processing. I'm not going to do a Middy deep dive here but the &lt;a href="https://middy.js.org/docs/intro/how-it-works" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; is great if you haven't used it before.&lt;/p&gt;

&lt;p&gt;We were already using Middy for various different things so it was a great place to do some checks before we execute our handler. The logic is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In the before&lt;/strong&gt; phase of the middleware, check for the &lt;code&gt;is-test&lt;/code&gt; flag in the object's metadata and if &lt;code&gt;true&lt;/code&gt;, set a global test context so that the handler is aware it's running as part of a test. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In the after&lt;/strong&gt; phase (which is triggered after the handler is finished), clear the context to avoid any issues for subsequent invocations of the warmed up function:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;S3SqsEventIntegrationTestHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;middy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MiddlewareObj&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="c1"&gt;// this happens before our handler is invoked.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;before&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;middy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MiddlewareFn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;middy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Request&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;SQSEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;objectMetadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getObjectMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isIntegrationTest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;objectMetadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;is-test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;true&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;setTestContext&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="nx"&gt;isIntegrationTest&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="c1"&gt;// this happens after the handler is invoked.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;after&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;middy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MiddlewareFn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setTestContext&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;isIntegrationTest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;before&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;after&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// reuse "after" so the context is also cleared on error&lt;/span&gt;
    &lt;span class="na"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's the test context code. It follows a simple TypeScript pattern to make the context read-only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;TestContext&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;isIntegrationTest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;_testContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TestContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;isIntegrationTest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;testContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Readonly&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TestContext&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;_testContext&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setTestContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;updatedContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TestContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;_testContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isIntegrationTest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;updatedContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isIntegrationTest&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I think this is the hardest part about solving the Serverless testing “problem”. I believe the correct way to do this is in a real AWS environment, not a local simulator, and making that deployed code &lt;em&gt;aware&lt;/em&gt; that it is running as part of a test is the trickiest part. Once you have some kind of pattern for that, the rest is straightforward enough.&lt;/p&gt;

&lt;p&gt;We then built upon this pattern for each of our various triggers, building up a set of middleware handlers for each trigger type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F7166d8833df84a2d525f13a7e64f085d%2Fa5d4d%2Ftesting-serverless-applications-code1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F7166d8833df84a2d525f13a7e64f085d%2Fa5d4d%2Ftesting-serverless-applications-code1.webp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our S3 middleware we pass the &lt;code&gt;is-test&lt;/code&gt; flag in an object's metadata, but for SQS and SNS we pass the flag using &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html#sqs-message-attributes" rel="noopener noreferrer"&gt;message attributes&lt;/a&gt;. &lt;/p&gt;
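&lt;p&gt;For the SQS case, the check in the middleware's &lt;code&gt;before&lt;/code&gt; phase boils down to something like this (a hedged sketch: the record shape below is trimmed down from the real Lambda &lt;code&gt;SQSRecord&lt;/code&gt; type, but the field names match the event payload):&lt;/p&gt;

```typescript
// Sketch: read the is-test flag from an SQS record's message attributes.
interface MessageAttributeLike {
  stringValue?: string;
  dataType: string;
}

interface SqsRecordLike {
  messageAttributes: Record<string, MessageAttributeLike>;
}

export const isTestMessage = (record: SqsRecordLike): boolean =>
  record.messageAttributes["is-test"]?.stringValue === "true";
```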

&lt;h2&gt;
  
  
  A note on Step Functions
&lt;/h2&gt;

&lt;p&gt;By far the most annoying trigger to deal with was Lambda Functions invoked by a State Machine task.&lt;/p&gt;

&lt;p&gt;There is no easy way of passing metadata around each of the states in a State Machine - a global state would be really helpful (but would probably be overused and abused by people). The only thing that is globally accessible by each state is the &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/input-output-contextobject.html" rel="noopener noreferrer"&gt;Context Object&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Our workaround was to use a specific naming convention when the State Machine is executed, with the execution name included in the Context Object and therefore available to every state in the State Machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fa9c841b4365b0ecf37aa604fbd35002a%2Fa5d4d%2Ftesting-serverless-applications-code2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2Fa9c841b4365b0ecf37aa604fbd35002a%2Fa5d4d%2Ftesting-serverless-applications-code2.webp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For State Machines that are executed by a Lambda Function, we can use our &lt;code&gt;testContext&lt;/code&gt; to prefix all State Machine executions with "IntegrationTest-". This is obviously a bit of a hack, but it does make it easy to spot integration test runs from the execution history of the State Machine.&lt;/p&gt;
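&lt;p&gt;The naming convention itself is simple enough to capture in a couple of pure helpers (illustrative names, not the project's actual code): the caller prefixes test executions, and every state can then check for the prefix via the execution name surfaced in the Context Object.&lt;/p&gt;

```typescript
// Sketch of the execution-name convention described above.
const EXECUTION_TEST_PREFIX = "IntegrationTest-";

// used when starting the State Machine from a Lambda Function
export const buildExecutionName = (isIntegrationTest: boolean, id: string): string =>
  isIntegrationTest ? `${EXECUTION_TEST_PREFIX}${id}` : id;

// used by middleware reading the execution name out of the task input
export const isIntegrationTestExecution = (executionName: string): boolean =>
  executionName.startsWith(EXECUTION_TEST_PREFIX);
```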

&lt;p&gt;We then make sure that the execution name is passed into each Lambda Task and that our middleware is able to read the execution name from the event. (Note that &lt;code&gt;$$&lt;/code&gt; provides access to the Context Object).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9047faa695418f05005b7c4ff191d575%2Fa5d4d%2Ftesting-serverless-applications-code3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9047faa695418f05005b7c4ff191d575%2Fa5d4d%2Ftesting-serverless-applications-code3.webp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another difficult thing to test with Step Functions is error scenarios. These will often be configured with retry and backoff functionality which can make tests too slow to execute. Thankfully, there is a way around this which my colleague, Tom Bailey, has &lt;a href="https://instil.co/blog/testing-step-functions-locally/" rel="noopener noreferrer"&gt;covered in a great post&lt;/a&gt;. I would recommend giving that a read.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mocking third party APIs
&lt;/h2&gt;

&lt;p&gt;We're now at a point where a Lambda Function is being invoked as part of our workload under test. That function is also aware that it's running as part of a test. The next thing we want to do is determine how we can mock the calls to our third party APIs.&lt;/p&gt;

&lt;p&gt;There are a few options here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WireMock&lt;/strong&gt;: You could host something &lt;em&gt;like&lt;/em&gt; WireMock in the AWS account and call the mocked API rather than the real one. I've used WireMock quite a bit and it works really well, but it can be difficult to maintain as your application grows. Plus, it's another thing that you have to deploy and maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway&lt;/strong&gt;: Either spin up your own custom API for this or use the built-in &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html" rel="noopener noreferrer"&gt;mock integrations&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DynamoDB&lt;/strong&gt;: This is our current choice. We have a mocked HTTP client that, instead of making an HTTP call, queries a DynamoDB table for a mocked response that was seeded before the test ran.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using DynamoDB gave us the flexibility we needed to control what happens for a given API call without having to deploy a bunch of additional infrastructure.&lt;/p&gt;
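&lt;p&gt;One way to sketch this (with our own illustrative names, and with the DynamoDB lookup injected as a plain function so the selection logic stays free of SDK details) is a factory that picks the real or mocked client based on the test context:&lt;/p&gt;

```typescript
// Sketch: select between the real HTTP client and a mocked one at runtime.
export interface HttpClient {
  get(url: string): Promise<string>;
}

export const createHttpClient = (
  isIntegrationTest: boolean,
  realClient: HttpClient,
  lookupMockResponse: (url: string) => Promise<string> // e.g. a DynamoDB query
): HttpClient => ({
  // under test, return the seeded response instead of calling the third party
  get: (url) => (isIntegrationTest ? lookupMockResponse(url) : realClient.get(url)),
});
```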

&lt;h2&gt;
  
  
  Asserting that something has happened
&lt;/h2&gt;

&lt;p&gt;Now it's time to determine if our test has actually passed or failed. A typical test would be structured like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;should successfully move documents to the correct place&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;seededPolicyData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;seedPolicyData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;whenDocumentIsUploadedToBucket&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;thenDocumentWasDeletedFromBucket&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;thenDocumentWasMovedToTheCorrectLocation&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With our assertions making use of the &lt;a href="https://github.com/erezrokah/aws-testing-library/blob/main/src/jest/README.md" rel="noopener noreferrer"&gt;&lt;code&gt;aws-testing-library&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;thenDocumentWasMovedToTheCorrectLocation&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;bucketName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;toHaveObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;expectedKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;aws-testing-library&lt;/code&gt; gives you a set of really useful assertions with built in delays and retries. For example:&lt;/p&gt;

&lt;p&gt;Checking an item exists in DynamoDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dynamo-db-table&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;toHaveItem&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;itemId&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking an object exists in an S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;s3-bucket&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;toHaveObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;object-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking if a State Machine is in a given state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;stateMachineArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;stateMachineArn&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;toBeAtState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ExpectedState&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's important to note that because you're testing in a live, distributed system, you will have to allow for cold starts and other non-deterministic delays when running your tests. It certainly took us a while to get the right balance between retries and timeouts. While at times it has been flaky, the benefits of having these tests far outweigh the occasional test failure.&lt;/p&gt;
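&lt;p&gt;As a minimal sketch of the retry idea (this is not our actual middleware, and the helper name and defaults are assumptions), a polling wrapper can re-run an asynchronous assertion until it passes or a deadline is hit:&lt;/p&gt;

```typescript
// Sketch: re-run an async check until it succeeds or a deadline passes,
// absorbing cold starts and eventual-consistency delays in a live system.
async function eventually<T>(
  check: () => Promise<T>,
  { timeoutMs = 30_000, intervalMs = 2_000 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let lastError: unknown;
  for (;;) {
    try {
      return await check(); // success: hand back whatever the check returned
    } catch (error) {
      lastError = error; // remember the most recent failure
    }
    if (Date.now() >= deadline) {
      throw lastError; // give up once the deadline has passed
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

&lt;p&gt;A test can then wrap its assertion, e.g. &lt;code&gt;await eventually(() =&amp;gt; expect({...}).toHaveObject('object-key'))&lt;/code&gt;, tuning the timeout per workload.&lt;/p&gt;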

&lt;h2&gt;
  
  
  Running the tests
&lt;/h2&gt;

&lt;p&gt;There are two places where these tests are executed: developer machines and CI.&lt;/p&gt;

&lt;p&gt;Each developer on our team has their own AWS account, and they regularly deploy a full version of the application and run these integration tests against it.&lt;/p&gt;

&lt;p&gt;What I really like to do is get into a test-driven development flow where I write the integration test first, make my code changes, which are &lt;a href="https://instil.co/blog/improving-cdk-development-cycle/" rel="noopener noreferrer"&gt;hot swapped using CDK&lt;/a&gt;, and then run my integration test until it turns green. This would be pretty painful if I were waiting on a full stack to deploy each time, but hot swap works well at reducing the deployment time.&lt;/p&gt;

&lt;p&gt;On CI we run these tests against a development environment after a deployment has finished.&lt;/p&gt;

&lt;h2&gt;
  
  
  It could be better
&lt;/h2&gt;

&lt;p&gt;There are a number of things that we would like to improve upon in this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Temporary Environments&lt;/strong&gt; - We would love to run these tests against temporary environments when a Pull Request is opened.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test data cleanup&lt;/strong&gt; - Sometimes tests are flaky and don't clean up after themselves properly. We have toyed with the idea of setting a TTL on DynamoDB records when data is created as part of a test.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run against production&lt;/strong&gt; - We don't run these against production yet, but that is the goal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open source the middleware&lt;/strong&gt; - I think more people could make use of the middleware than just us, but we haven't got round to open sourcing it &lt;em&gt;yet.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS is trying to make it better&lt;/strong&gt; - Serverless testing is a hot topic at the moment, and AWS have responded with some great resources, which you can find here: &lt;a href="https://github.com/aws-samples/serverless-test-samples" rel="noopener noreferrer"&gt;https://github.com/aws-samples/serverless-test-samples&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
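&lt;p&gt;The TTL idea in point 2 can be sketched in a few lines. This is hypothetical, not project code: the attribute name and the one-hour window are assumptions, and the table's DynamoDB TTL setting would need to point at the same attribute:&lt;/p&gt;

```typescript
// Sketch: stamp records created by tests with an expiry attribute so that
// DynamoDB's TTL feature deletes leftovers even if a flaky test never
// reaches its own cleanup step.
const TEST_DATA_TTL_SECONDS = 60 * 60; // keep test data for one hour

function withTestTtl<T extends Record<string, unknown>>(
  item: T,
  nowMs: number = Date.now()
): T & { expiresAt: number } {
  // DynamoDB TTL expects an epoch timestamp in seconds, not milliseconds.
  return { ...item, expiresAt: Math.floor(nowMs / 1000) + TEST_DATA_TTL_SECONDS };
}
```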

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;While there are still some rough edges to our approach, the integration tests really helped with the issues we outlined earlier, and the benefits can be nicely summarised by three of the four key DORA metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt; - The team's confidence increased when performing deployments, which increased their frequency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lead Time for Changes&lt;/strong&gt; - Less need for manual testing reduced the time it takes for a commit to make it to production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt; - Permissions errors no longer happen in production and bugs are caught sooner in the process. The percentage of deployments causing a failure in production reduced.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Z_jhircdAPE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Zero to Serverless Car Insurance - Part 3</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Tue, 18 Apr 2023 16:06:54 +0000</pubDate>
      <link>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-3-4jaa</link>
      <guid>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-3-4jaa</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/aws-builders/zero-to-serverless-car-insurance-part-1-cml"&gt;Part 1&lt;/a&gt; and &lt;a href="https://dev.to/aws-builders/zero-to-serverless-car-insurance-part-2-3ba"&gt;2&lt;/a&gt; of this series we've shared some of the technical challenges we faced when building a Serverless car insurance platform.&lt;/p&gt;

&lt;p&gt;In this post we are going to take a look at some ways in which you can help guide your team on their transition to Serverless.&lt;/p&gt;

&lt;p&gt;In the 1970s &amp;amp; 80s, the expression “nobody ever got fired for buying IBM” was coined to illustrate IBM's utter dominance of the IT industry and how executives, who were playing it safe, kept buying IBM.&lt;br&gt;
Rather than look at the competition, which could potentially have saved them time and money in the long run, they opted for the &lt;em&gt;perceived&lt;/em&gt; safe choice.&lt;/p&gt;

&lt;p&gt;I have seen a similar trend with regard to Serverless: many teams are reluctant to adopt a Serverless-First approach, despite the benefits being clear and proven:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No infrastructure provisioning or maintenance&lt;/li&gt;
&lt;li&gt;Automatic scaling&lt;/li&gt;
&lt;li&gt;Pay for what you use&lt;/li&gt;
&lt;li&gt;Highly available and secure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are potentially many reasons for this lack of adoption, but what I want to focus on is fear.&lt;/p&gt;

&lt;p&gt;Fear when it comes to the developer experience, fear when it comes to modelling databases, fear when it comes to cost and scaling.&lt;/p&gt;

&lt;p&gt;When I started on the &lt;a href="https://strollinsurance.co.uk" rel="noopener noreferrer"&gt;Stroll&lt;/a&gt; project I was comfortable with Lambda functions and messaging queues, but honestly I was &lt;em&gt;afraid&lt;/em&gt; of DynamoDB. When you work with DynamoDB you are told that “you have to know all your access patterns up front”. As an engineer you know how easily requirements change, and needing to know your access patterns seemed like a huge risk.&lt;/p&gt;

&lt;p&gt;I wanted to see if this fear was just something I experienced, or had it impacted other teams as well. So I asked a few of my fellow Serverless AWS Community Builders a question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you first started your Serverless journey what were you, your team or your company afraid of, and how did you overcome it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And I got some great answers:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We were afraid that getting up to speed with the Serverless paradigm would take ages and that we'd end up investing a lot over a long time before we could reap any benefits.&lt;br&gt;
We overcame this by breaking things down and achieving small wins.&lt;br&gt;
Getting the first Lambda in production only took a couple of days and felt great!&lt;br&gt;
From there, we kept on adding the next small building block, learning what we needed to know as we went.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;On the company that I was working at that time the main fears/blockers were:&lt;br&gt;
Questioning if Serverless would provide value to our systems and customers.&lt;br&gt;
The misconceptions: Serverless is more expensive than X, Serverless is harder to test than X or Serverless is more complex than X.&lt;br&gt;
Education, not only in Serverless but in distributed systems in general.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;This was late 2016/early 2017:&lt;br&gt;
Vendor lock-in - got over it quickly though&lt;br&gt;
That Serverless wouldn't gain market share - that we'd end up with an architecture with no community backing which we'd struggle hiring for&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;If I circle back to my own fears around DynamoDB, the solution to that problem was education. I educated myself on DynamoDB (Alex DeBrie's book is a great starting point), eliminated the fears that I had, and came to the realisation that planning, discovering and knowing your access patterns upfront is actually a good thing that leads to a better architecture overall.&lt;/p&gt;

&lt;p&gt;One of the things that was key to our success is that we had a &lt;em&gt;confident Serverless team&lt;/em&gt;. This confidence wasn't something we started out with; we had to eliminate the fears and concerns that the team had along the way.&lt;/p&gt;

&lt;p&gt;One of the best things about our jobs is that we start with an empty git repository and turn it into a business for someone, that experience never gets old. What we have learnt is that you can do this better and faster with Serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encouraging the Serverless mindset
&lt;/h2&gt;

&lt;p&gt;One of the first things we had to learn as a team was this notion of a Serverless mindset.&lt;/p&gt;

&lt;p&gt;And if you’re not really in the Serverless world, drinking the Serverless Kool-Aid as one of my colleagues would say, I'm sure that “Serverless mindset” isn't even a term you would be familiar with.&lt;br&gt;
I think the origin of this comes from a blog article &lt;a href="https://ben11kehoe.medium.com/serverless-is-a-state-of-mind-717ef2088b42" rel="noopener noreferrer"&gt;written by Ben Kehoe entitled “Serverless is a State of Mind”&lt;/a&gt;.&lt;br&gt;
And really what it boils down to is this: if you want to succeed with Serverless technology you need to think differently about how and why you build things, and focus on delivering business value more than on the underlying technology you are using.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Encouraging this Serverless mindset is key to building confidence.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you really want to encourage this mindset, it can't start at an individual level; it has to come from an organisational level.&lt;/p&gt;

&lt;p&gt;At Instil, this idea of adopting a Serverless mindset, or going Serverless-first as some may say, started from the top. We have an engineering strategy which lays out a number of things that we want to focus on from an engineering perspective, and in the “cloud” section this was the first point:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the last few years, Serverless has exploded and dramatically changed how applications are built - no longer do we need to worry about capacity planning, patching servers or wasting money on under utilised resources, we now have (virtually) infinite computing power at our disposal ready to scale up massively in a fraction of a second. We want to embrace Serverless but this requires a cultural change in how we approach building software. We all understand that code is a liability and in the Serverless world, our focus shifts to connecting events and services through configuration while reducing the amount of code we write. Right now, the Functionless (aka Serverfull) movement is also gaining popularity driven in part by AWS enabling application developers to ditch Lambda functions in favour of direct integrations between managed services.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s a confidence booster right there: if leadership are encouraging a Serverless-first approach, that should help teams take that first step.&lt;br&gt;
But it's one thing saying that you want to take this approach, and a very different thing putting it into action.&lt;/p&gt;

&lt;p&gt;The Serverless mindset was definitely something that our team struggled with, at the very beginning we were treating Lambda Functions as just &lt;a href="https://dev.to/aws-builders/zero-to-serverless-car-insurance-part-1-cml"&gt;another way to get some compute in the cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As developers we focused too much on what Serverless meant for us, and I think when you take that narrow-minded approach you can come up short on some good reasons for choosing Serverless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is it easier to deploy? Perhaps…&lt;/li&gt;
&lt;li&gt;Is it easier to test? Not if you try to do it locally.&lt;/li&gt;
&lt;li&gt;Is it easier to code? Only if you educate yourself first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing really stands out from that list.&lt;/p&gt;

&lt;p&gt;In fact I know that early on in the project there were people who had their doubts about whether Serverless was a good choice.&lt;br&gt;
The team needed to change their mindset: instead of solely focusing on the technology and how it made their lives as developers easier, they needed to look at the bigger picture. What does this mean for Instil, and what does it mean for our customer?&lt;/p&gt;

&lt;p&gt;One of the first steps on this journey is enlightening developers that writing less code is better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enlightening developers that writing less code is better
&lt;/h2&gt;

&lt;p&gt;One of the key steps to instilling confidence in our Serverless team was to learn to write the code that really matters. In &lt;a href="https://dev.to/aws-builders/zero-to-serverless-car-insurance-part-2-3ba"&gt;Part 2&lt;/a&gt; we shared how our team came to this realisation.&lt;/p&gt;

&lt;p&gt;And when we think about how we encourage the Serverless mindset in our teams, we need some key people to drive that forward. The lead engineers and architects of the team need to have the confidence that Serverless is the right choice. You can't expect the rest of the team to succeed if the people making the decisions have their doubts.&lt;br&gt;
I certainly had my doubts, but that confidence came from educating myself. When I first joined Instil, Garth, our Director of Training, shared a quote:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Being a software engineer is just agreeing to do homework for the rest of your life.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So if you are a lead engineer and you want to encourage the Serverless mindset in your team, you first have to educate yourself and build up your own confidence.&lt;/p&gt;

&lt;p&gt;At Instil we strive for engineering excellence, and that kind of culture attracts people who love to write code. But how do you convince a team of die-hard programmers to write less code?&lt;/p&gt;

&lt;p&gt;As developers we're told that “code is a liability”. But let's be honest with ourselves, do we really believe that? Or do we say to ourselves: &lt;em&gt;their&lt;/em&gt; code is a liability, but mine? Mine is perfect!&lt;/p&gt;

&lt;p&gt;We wanted to encourage the engineers to really &lt;em&gt;own&lt;/em&gt; the problems they were trying to solve and to do that we needed to create a safe space for them to experiment, propose new ideas, get feedback from their peers and then be given the opportunity to actually implement the changes.&lt;/p&gt;

&lt;p&gt;Now I need to make a disclaimer: I’m a huge Apple fanboy, and secretly my dream job is to be an independent iOS developer living off my App Store profits. If you’re familiar with iOS development you'll know that the main language you can use is Swift.&lt;/p&gt;

&lt;p&gt;For new Swift language features there is a process called &lt;a href="https://github.com/apple/swift-evolution/blob/main/proposal-templates/0000-swift-template.md" rel="noopener noreferrer"&gt;Swift Evolution&lt;/a&gt;; it’s quite a big template with a few headers that enables people to propose new features. Now I’m not saying that Apple invented this, I know that Kotlin has a similar process and I’m sure other languages do too, but remember: Apple fanboy, so history and facts are not important. This is where I drew inspiration from.&lt;/p&gt;

&lt;p&gt;So we created a process called &lt;em&gt;Stroll Evolution&lt;/em&gt;, a colleague summarised it like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Stroll Evolution, in this sense, includes a proposed solution to achieve some goal (motivation), with information about the proposed design, what effects there would be, as well as alternatives considered. It’s always good when you can answer “but why didn’t you do it some other way?” At least even for yourself in the future.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And with a simple template with the following headers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Motivation&lt;/li&gt;
&lt;li&gt;Proposed solution&lt;/li&gt;
&lt;li&gt;Detailed design&lt;/li&gt;
&lt;li&gt;Effect on Web App&lt;/li&gt;
&lt;li&gt;Effect on Mobile App&lt;/li&gt;
&lt;li&gt;Alternatives Considered&lt;/li&gt;
&lt;li&gt;Decision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We created a way for engineers to own a problem and help find a solution. This process helped us to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design our first DynamoDB data model, where we followed the steps laid out in Alex DeBrie’s DynamoDB book and came up with our access patterns and a primary key design to enable them.&lt;/li&gt;
&lt;li&gt;Plan out what our first step function would look like and why we wanted to use step functions in the first place.&lt;/li&gt;
&lt;li&gt;Decide how we process insurance policy documents and what the best AWS messaging service was for solving that problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So not only did it help us solve project-specific problems, it gave team members a way to educate themselves on the Serverless offerings of AWS. It let them see for themselves that they can either spend weeks building a thing, or just connect services together and focus on actually delivering business value instead.&lt;/p&gt;

&lt;p&gt;I’m not saying that this exact process is going to work for your team; I’m just saying that there needs to be a way for engineers to own the problems they are trying to solve.&lt;/p&gt;

&lt;p&gt;On our team, anyone can come up with a Stroll Evolution, from senior engineers to apprentice software engineers.&lt;br&gt;
And with Serverless the engineers need to own more than they might in other architectures. A Serverless engineer can't just write some code and chuck it over the wall for someone else to deploy; they need to understand what that looks like and own the end-to-end solution. And in my opinion that's a good thing.&lt;/p&gt;

&lt;p&gt;But if we want engineers to own the end-to-end solution, they also need to take ownership of deploying it to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling production deployments with confidence
&lt;/h2&gt;

&lt;p&gt;There’s one final point I want to make around building confidence in your Serverless teams and that’s around production deployments, specifically around verifying that things are working as expected.&lt;/p&gt;

&lt;p&gt;We wanted everyone on the team to have the confidence to kick off a production deployment. But for a period of time this wasn't the case: deployments were left for senior engineers to kick off, which impacted our lead time when delivering new features and reduced our deployment frequency.&lt;/p&gt;

&lt;p&gt;No one actually came out and said “only this group of people can manage deployments”; it was just something that happened naturally because people were worried about breaking something. The reason was a gap in our testing approach.&lt;/p&gt;

&lt;p&gt;Some of you may be familiar with the testing triangle, where you might have E2E tests at the top, integration tests in the middle and unit tests at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pfbnv2vyjvhmxo0hvr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pfbnv2vyjvhmxo0hvr6.png" alt="The Testing Triangle" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On Stroll, however, we were missing the bit in the middle. We had plenty of unit tests; in fact, I would say we had too many. Also, because we encouraged developers to use things like direct service integrations and step functions, unit testing became less useful.&lt;/p&gt;

&lt;p&gt;We had some E2E tests that used the UI to drive the test, but there were no integration tests.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The reason being that with Serverless, integration testing is hard.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You have two choices: try to run everything locally by simulating AWS on your developer machine, or deploy to AWS and test in the cloud.&lt;/p&gt;

&lt;p&gt;But this was a gap in our verification process that we needed to plug. So (using a Stroll Evolution) we put together a plan to write some integration tests, and from that plan we came up with a definition of what integration testing meant for the project:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An integration test (on this project) is defined as one that tests integrations with AWS services, e.g. DynamoDB, S3, SQS, but does not test integrations with third parties; these should be mocked out instead.&lt;/p&gt;
&lt;/blockquote&gt;
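&lt;p&gt;To illustrate that boundary (the names here are hypothetical, not our real code): a third-party dependency sits behind an interface, and the integration test swaps in a recording fake while the AWS services stay real:&lt;/p&gt;

```typescript
// Sketch: third parties are mocked out behind an interface; AWS services
// (DynamoDB, S3, SQS, ...) are exercised for real in the integration test.
interface PaymentsGateway {
  takePayment(policyId: string, amountPence: number): Promise<{ reference: string }>;
}

class FakePaymentsGateway implements PaymentsGateway {
  readonly calls: Array<{ policyId: string; amountPence: number }> = [];

  async takePayment(policyId: string, amountPence: number) {
    this.calls.push({ policyId, amountPence }); // record the call for assertions
    return { reference: `fake-${this.calls.length}` };
  }
}
```

&lt;p&gt;The workload under test is wired up with the fake, runs against real AWS resources, and the test can assert both on the AWS side effects and on the recorded third-party calls.&lt;/p&gt;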

&lt;p&gt;The aim of these tests was firstly to verify that the workload is functioning correctly, but also that you have the correct IAM permissions and that the necessary resources have deployed correctly.&lt;/p&gt;

&lt;p&gt;If you’re deploying a workload to AWS it is really important for everyone involved in the project to be confident that it is running correctly. We have found that writing tests for each step of the process has been immensely helpful in giving us this confidence, and as a result production deployments are thankfully not left to an individual but are instead the responsibility of the entire team.&lt;/p&gt;

&lt;p&gt;In our next post a colleague will be sharing how we ensured our Step Function State Machines were executing correctly.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Zero to Serverless Car Insurance - Part 2</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Tue, 11 Apr 2023 10:27:23 +0000</pubDate>
      <link>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-2-3ba</link>
      <guid>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-2-3ba</guid>
      <description>&lt;p&gt;Welcome to Part 2 of &lt;a href="https://dev.to/matthewwilson/series/22472"&gt;our series&lt;/a&gt; on building from Zero to Serverless Car Insurance. In &lt;a href="https://dev.to/aws-builders/zero-to-serverless-car-insurance-part-1-cml"&gt;Part 1&lt;/a&gt;, we introduced the platform and discussed how we built an end-to-end solution using Serverless technology on AWS.&lt;/p&gt;

&lt;p&gt;In this post, we'll be focusing on some key improvements we've made to the platform, particularly focusing on how writing less code is a good thing!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kdclHGUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://instil.co/static/breaking-down-the-lambdalith-cd3cdc7641a3adf32e005902ac86615e.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kdclHGUI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://instil.co/static/breaking-down-the-lambdalith-cd3cdc7641a3adf32e005902ac86615e.gif" alt="An animated gif showing how the lambdalith was broken down" width="720" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It was time to break down the Lambdalith! This had a number of key benefits for the project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We no longer had to worry about maintaining our own Apollo server and could instead hand that responsibility over to AWS. Security patches and updates for new features are no longer a concern for us.&lt;/li&gt;
&lt;li&gt;We could create smaller, domain-based services instead, enabling developers to make changes with less fear of breaking other parts of the application.&lt;/li&gt;
&lt;li&gt;It becomes easier to test.&lt;/li&gt;
&lt;li&gt;It reduces the overall size of the Lambda function package, which reduces the duration of cold starts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before we dive deeper into this change we need to get one thing straight:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Serverless is not just about Lambda functions!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You could build an entire serverless application on AWS with no containers or functions at all. Our decision to adopt AppSync has enabled us to make use of more of the Serverless offerings of AWS, which leads to the next exciting phase of our project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going Lambda-less!
&lt;/h2&gt;

&lt;p&gt;There I was, working from home during a global pandemic. We were building the customer portal for Stroll, enabling customers to log in and see their policy information, download documents and submit claims.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KW0Poqrd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/31993611e9587f6e84324487047716aa/09262/stroll-customer-portal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KW0Poqrd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/31993611e9587f6e84324487047716aa/09262/stroll-customer-portal.png" alt="The Stroll customer portal" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The task was simple: execute a query to get policy data from DynamoDB. I am sure most of us have built some CRUD functionality similar to this.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Not to sound too over dramatic, but little did I know that my life was about to change forever.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To understand this life-changing moment we first need to understand how AppSync works.&lt;br&gt;
I am not going to take too much time discussing GraphQL concepts in this post; that information is &lt;a href="https://graphql.org/learn/"&gt;freely available online&lt;/a&gt;. But for context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://graphql.org/"&gt;GraphQL&lt;/a&gt; is just a schema, there are many different implementations of a GraphQL server, &lt;a href="https://aws.amazon.com/appsync/product-details/"&gt;AppSync&lt;/a&gt; being one of them. I mentioned &lt;a href="https://www.apollographql.com/docs/apollo-server/"&gt;Apollo&lt;/a&gt; server in this series as well.&lt;/li&gt;
&lt;li&gt;The benefit of using GraphQL is that you have a well-defined representation of your data model that clients can query, and they only need to request the information they need.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AppSync provides the GraphQL server, authentication and data source integrations. You provide your schema and resolvers for the fields within it.&lt;/p&gt;

&lt;p&gt;There are 2 core concepts with &lt;a href="https://aws.amazon.com/appsync/product-details/"&gt;AppSync&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Source - A persistent storage system (e.g. a DynamoDB table) or a trigger (e.g. a Lambda function).&lt;/li&gt;
&lt;li&gt;Resolver - Resolvers are composed of request and response mapping templates. These templates map your GraphQL query to the appropriate request for your data source. For example, if you wanted to query a DynamoDB table, your request mapping template would transform the GraphQL query into a DynamoDB query.&lt;/li&gt;
&lt;/ol&gt;
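&lt;p&gt;To make the resolver concept concrete, here is the mapping expressed as a plain TypeScript function (the query and field names are hypothetical; in AppSync itself this logic lives in a VTL or JavaScript mapping template, not application code):&lt;/p&gt;

```typescript
// Sketch: what a request mapping template does, conceptually. It takes the
// arguments of a GraphQL query, e.g. getPolicy(policyId: "..."), and builds
// the request shape a DynamoDB data source understands.
interface GetPolicyArgs {
  policyId: string;
}

function toDynamoDbGetItemRequest(args: GetPolicyArgs) {
  return {
    operation: 'GetItem',
    key: {
      // DynamoDB attribute values are typed; 'S' marks a string value.
      pk: { S: `POLICY#${args.policyId}` },
    },
  };
}
```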

&lt;p&gt;So if we jump back to our updated diagram, each of these Lambda functions (the orange icons) are configured as an individual data source in AppSync.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zgRW7nga--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/9b9971734da0803751d0b2030624e488/0f98f/phase2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zgRW7nga--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/9b9971734da0803751d0b2030624e488/0f98f/phase2.jpg" alt="Diagram showing AppSync and multiple Lambda DataSources" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Up until this point we had been using Lambda Data Sources exclusively for our AppSync API. Every time a GraphQL query was executed, AppSync was invoking Lambda functions to get the data it needed.&lt;/p&gt;

&lt;p&gt;But AppSync is so much more powerful than this! You can resolve data directly from various sources, including DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ebdFX9RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/5f099bc5cec7fe2dfc78f734e2c2bd75/0f98f/phase2-with-ddb-datasources.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ebdFX9RD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/5f099bc5cec7fe2dfc78f734e2c2bd75/0f98f/phase2-with-ddb-datasources.jpg" alt="Updated diagram showing a Lambda DataSource being replaced with a DynamoDB DataSource" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was able to build the various queries needed for the customer portal without having to invoke a single Lambda function or write any “code” to make it happen.&lt;/p&gt;

&lt;p&gt;Because we were now using two managed services (AppSync + DynamoDB), we were able to offload the gluing of these two services together into configuration rather than code. This is a good thing: instead of spending time writing “glue” code, we can write the code that matters to our customers, the code that is going to give them some unique selling points in their marketplace.&lt;/p&gt;

&lt;p&gt;Not only did this approach get rid of some code for us (remember, code is a liability), it also had the nice added benefit of being incredibly fast!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;No Lambda function, no cold start!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This was the life-changing moment for me: combining the two services directly resulted in performance that I was unable to achieve by writing my own Lambda function. It was from this moment on that I started to “trust” the managed services more and really dig deep into the Serverless offerings of AWS.&lt;/p&gt;

&lt;p&gt;Like all technology decisions, this one has downsides that we need to examine.&lt;/p&gt;

&lt;p&gt;The big one is that you have to use &lt;a href="https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html"&gt;VTL templates&lt;/a&gt;. These are used to map the GraphQL queries to, in this example, a DynamoDB query:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VLQ-lNjX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/21209e8aae7d610a5d12031b9ffbecdd/e8950/vtl-template.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VLQ-lNjX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/21209e8aae7d610a5d12031b9ffbecdd/e8950/vtl-template.png" alt="An example vtl template" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a brilliant idea and because most of the managed services are using HTTP APIs, you can effectively integrate with any of them.&lt;/p&gt;

&lt;p&gt;For whatever reason, someone at AWS decided that Velocity templates were the way to go for building up the requests and responses of these direct integrations.&lt;/p&gt;

&lt;p&gt;These are hard to unit test and offer limited utility methods compared to something like a TypeScript Lambda function.&lt;/p&gt;

&lt;p&gt;Thankfully, AWS have recently released JavaScript resolvers. Although still quite limited, they enable developers to write their resolver templates in JavaScript rather than VTL. This is a welcome improvement and we hope to adopt them for future use cases.&lt;/p&gt;
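&lt;p&gt;As a rough sketch (the &lt;code&gt;getPolicy&lt;/code&gt; query and its field names are illustrative, not from the real project), a JavaScript resolver pairs a &lt;code&gt;request&lt;/code&gt; and a &lt;code&gt;response&lt;/code&gt; function. AppSync requires them to be exported with ESM &lt;code&gt;export&lt;/code&gt;; that keyword is dropped here so the sketch stands alone:&lt;/p&gt;

```javascript
// Hedged sketch of an AppSync JavaScript resolver for an assumed
// `getPolicy(id)` query backed by a DynamoDB data source.
function request(ctx) {
  // Map the GraphQL arguments onto a DynamoDB GetItem request.
  return {
    operation: "GetItem",
    key: { id: { S: ctx.args.id } },
  };
}

function response(ctx) {
  // Surface any data source error, otherwise return the resolved item.
  if (ctx.error) throw new Error(ctx.error.message);
  return ctx.result;
}
```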

&lt;p&gt;In our next post we will look at how we helped the team grow in confidence when working with the Serverless architecture of Stroll.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>appsync</category>
    </item>
    <item>
      <title>Zero to Serverless Car Insurance - Part 1</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Mon, 03 Apr 2023 14:26:54 +0000</pubDate>
      <link>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-1-cml</link>
      <guid>https://forem.com/aws-builders/zero-to-serverless-car-insurance-part-1-cml</guid>
      <description>&lt;p&gt;In early 2021, we embarked on an ambitious project to build a car insurance platform from scratch using Serverless technology on AWS. Over the past two years, our team has learned a great deal about what it takes to build a Serverless platform - what it means to be Serverless-first, what works well, what doesn't, and most importantly, how to fully embrace the Serverless offering on AWS.&lt;/p&gt;

&lt;p&gt;The official vision for Stroll is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A digitally led broking business, combining innovative technology with industry-leading expertise to deliver exceptional customer experiences. Along the way, Stroll is transforming how people buy insurance and manage their policy, starting with the car insurance segment with a clear vision of building into other areas and across new territories.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The innovative technology mentioned in this vision is a shiny new car insurance platform built with managed services on AWS.&lt;br&gt;
From the beginning we wanted to build it using Serverless technology.&lt;/p&gt;

&lt;p&gt;Serverless has grown to mean many things and can even mean something different depending on which cloud provider you are paying for. For our team and for the purpose of this article, Serverless is a way of building and running applications without having to manage the underlying infrastructure.&lt;br&gt;
Of course there are still servers running in a data centre somewhere but those servers are not our responsibility. With Serverless we as developers can focus on our core product instead of worrying about managing and operating servers or runtimes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PEzqHMtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/59fd1a7260bb2c53ccf2b464164c7e88/00d43/shared-responsibility-model.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PEzqHMtd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/59fd1a7260bb2c53ccf2b464164c7e88/00d43/shared-responsibility-model.png" alt="AWS Shared Responsibility Model" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AWS Shared Responsibility Model&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most cloud providers have this concept of a shared responsibility model, which breaks down what you are responsible for as a customer and what AWS takes responsibility for.&lt;br&gt;
What we want to do is move that dotted line up as much as possible and let AWS do the “undifferentiated heavy lifting” (the stuff that adds no value to the product) for us.&lt;/p&gt;

&lt;p&gt;The benefits of Serverless are clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No infrastructure provisioning or maintenance&lt;/li&gt;
&lt;li&gt;Automatic scaling&lt;/li&gt;
&lt;li&gt;Pay for what you use&lt;/li&gt;
&lt;li&gt;Highly available and secure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Through this series of blog posts, we're excited to share our team's Serverless journey with you and hope you learn something useful regardless of your favourite cloud provider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1 - The Lambdalith
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gyy2oVU3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/9694316982a22a28aecc3417367aecff/00d43/hard-working-instillers.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gyy2oVU3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/9694316982a22a28aecc3417367aecff/00d43/hard-working-instillers.png" alt="Photo of some stroll team members hard at work" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I joined Instil 4 years ago, one of the things I noticed right away was the focus on quality. Testing was at the core of every project; you quickly buy in and appreciate the benefits of test-driven development and automated testing. This project was no different. I can say with the utmost confidence that my colleagues had good unit testing in place from the very start.&lt;br&gt;
But there was one mindset we started with that needed to change: the assumption that we must be able to build, run and test the backend locally.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It would be madness to not be able to run the entire stack on your machine, right?!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, when you are building Serverless applications you really need to switch your mindset from running locally to running in the cloud. This has a profound impact not only on how you verify that your application is working correctly, but also on how you architect the solution.&lt;br&gt;
This project started off treating AWS Lambda as just another way to get some compute in the cloud. Yes, we were using a Lambda function, but it could just as easily have been a Docker container or an EC2 instance instead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c0nlK5OF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ad03eaae0d3139370a04e7a242d966e0/a2510/phase1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c0nlK5OF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ad03eaae0d3139370a04e7a242d966e0/a2510/phase1.jpg" alt="Architecture diagram visualising an appsync api with a single Lambda datasouce" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The team had built what you could call a Lambdalith: a single Lambda function running Apollo server and resolving all of the GraphQL mutations and queries. Now why do such a thing?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The entire backend can be run locally&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click the green play button in your IDE, or run a terminal command and off you go. You can have your client talking to the backend in no time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution is more portable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is less chance of “vendor lock-in”, meaning if AWS suddenly gets too expensive you can move to another provider with less effort.&lt;/p&gt;




&lt;p&gt;And yes those are valuable things, but what are the downsides?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold Starts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Spend any time investigating AWS Lambda and you will most likely encounter the term “cold start”. Your functions shut down if they haven’t been invoked in a while, so the next invocation incurs a period in which the underlying runtime must be initialised and your function code loaded before it can execute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eY6E9ZcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ff56525e879db36439c4f53237eae448/4ef49/lambda-execution-lifecycle.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eY6E9ZcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://instil.co/static/ff56525e879db36439c4f53237eae448/4ef49/lambda-execution-lifecycle.png" alt="Diagram of the AWS Lambda Function execution lifecycle" width="800" height="178"&gt;&lt;/a&gt;&lt;br&gt;
Source: &lt;a href="https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/"&gt;https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the early days of Lambda this cold start introduced so much latency that it ruled the service out for many teams. However, because it’s a managed service, AWS have been able to optimise this process, and you essentially get those optimisations for “free” without needing to make any code changes.&lt;br&gt;
Because of our decision to create a Lambdalith, we were doing some heavy lifting of our own after already incurring a cold start penalty: namely, the initialisation of Apollo server. The desire to test and run locally caused us to incur a performance hit in production!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The desire to test locally pushed the team into a significant architectural decision: the Lambdalith.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This meant not only that we were taking a performance and cost hit, but also that we were not making full use of other AWS managed services like &lt;a href="https://aws.amazon.com/appsync/product-details/"&gt;AppSync&lt;/a&gt; (AWS' managed GraphQL service).&lt;/p&gt;

&lt;p&gt;So how do we make testing in the cloud easier for developers?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ensure that every developer has their own AWS account.&lt;/strong&gt; This is crucial: a sandboxed environment makes it easy for developers to test not only their code but also their deployments. You could have one shared developer account and prefix resources with developer names, but that is just asking for conflicts or accidental deletions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make the deployment process fast.&lt;/strong&gt; &lt;a href="https://instil.co/blog/improving-cdk-development-cycle/"&gt;We make heavy use of the hotswap and watch functionality in CDK&lt;/a&gt;, enabling developers to hot-reload their changes automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If you need an even shorter feedback loop, consider invoking your Lambda code locally.&lt;/strong&gt; It’s just a function, after all, and AWS services are accessed over HTTP APIs. You just need to make sure the correct permissions and resources are deployed.&lt;/li&gt;
&lt;/ol&gt;
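&lt;p&gt;Point 3 can be as simple as importing the handler and calling it with a hand-crafted event. The handler below is a stand-in for illustration, not our real code:&lt;/p&gt;

```javascript
// Stand-in handler; in practice you would `require` your real exported handler.
const handler = async (event) => {
  // A real handler would call deployed AWS resources here, using the
  // credentials already configured in your shell.
  return { statusCode: 200, body: JSON.stringify({ echoed: event.name }) };
};

// "Local invoke": just call the function with a test event.
handler({ name: "quote-requested" }).then((response) => {
  console.log(response.statusCode, response.body);
});
```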

&lt;p&gt;I like to think of this as phase one of the project. It worked and it was built using Serverless technologies, but it had much room for improvement. The team realised that this approach was not going to work going forward and decided to replace Apollo server with AppSync. This was a good decision, as we no longer had to worry about maintaining our own Apollo server and could instead hand that responsibility over to AWS. The change also enabled us to break the Lambdalith down into smaller domain-based services, which we will cover in our next post.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>lambda</category>
      <category>appsync</category>
    </item>
    <item>
      <title>Lambda Container Images</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Wed, 06 Jul 2022 20:37:00 +0000</pubDate>
      <link>https://forem.com/matthewwilson/lambda-container-images-54dn</link>
      <guid>https://forem.com/matthewwilson/lambda-container-images-54dn</guid>
      <description>&lt;p&gt;Recently AWS released a new way for developers to package and deploy their Lambda functions as "Container Images". This enables us to build a Lambda with a docker image of your own creation. The benefit of this is we can now easily include dependencies along with our code in a way that is more familiar to developers. If you have used docker containers before, then this is much simpler to get started with than the other option - Lambda layers.&lt;/p&gt;

&lt;p&gt;AWS have provided developers with a number of base images for each of the current Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby). It is easy for a developer to then use one of these images as a base, and build their own image on top.&lt;/p&gt;

&lt;p&gt;Of course there are many sensible use cases for container images. Perhaps you want to include some machine learning dependencies? Maybe you would love to have FFMPEG in your lambda for your video processing needs? Or you want to nuke your entire AWS account to avoid a hefty bill? &lt;/p&gt;

&lt;p&gt;You heard me: in this blog article, we are going to build a container image with &lt;a href="https://github.com/rebuy-de/aws-nuke" rel="noopener noreferrer"&gt;aws-nuke&lt;/a&gt; installed! This will delete everything in an AWS account (excluding our fancy new container image Lambda). aws-nuke is built using Go, but we are going to start with the &lt;a href="https://hub.docker.com/r/amazon/aws-lambda-nodejs" rel="noopener noreferrer"&gt;Node.js base image&lt;/a&gt; and build our own Lambda using JavaScript. The library isn't available on npm, so there is no easy way to pull it into our Lambda function; container images give developers a way to mix and match different tools to build a solution to the problem at hand.&lt;/p&gt;

&lt;p&gt;To get started with our new container image, we can create a Dockerfile like so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; public.ecr.aws/lambda/nodejs:12&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="Instil &amp;lt;team@instil.co&amp;gt;" &lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./lambda/nuke.js ./lambda/package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [ "nuke.lambdaHandler" ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we are building from the &lt;code&gt;lambda/nodejs:12&lt;/code&gt; base image and copying over our Lambda function code.&lt;br&gt;
Notice the last line of our Dockerfile, &lt;code&gt;CMD [ "nuke.lambdaHandler" ]&lt;/code&gt;. Because we are using one of the base images, it comes pre-installed with the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html#runtimes-api-client" rel="noopener noreferrer"&gt;Lambda Runtime Interface Client&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The runtime interface client in your container image manages the interaction between Lambda and your function code. The Runtime API, along with the Extensions API, defines a simple HTTP interface for runtimes to receive invocation events from Lambda and respond with success or failure indications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Therefore &lt;code&gt;CMD [ "nuke.lambdaHandler" ]&lt;/code&gt; lets the interface client know what handler function to call when it receives an invocation event.&lt;/p&gt;

&lt;p&gt;Before we add the nuclear option, let's create the skeleton for our handler function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lambdaHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For now it simply returns a 200 response.&lt;/p&gt;

&lt;p&gt;Not only does our container image include the Lambda Runtime Interface Client, it also includes the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html#runtimes-test-emulator" rel="noopener noreferrer"&gt;Runtime Interface Emulator&lt;/a&gt;. This allows you to test your function locally, which, in my opinion, is one of the killer reasons to adopt container images for your project.&lt;/p&gt;

&lt;p&gt;Given we have a project structure like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── Dockerfile
├── docker-compose.yml
└── lambda
    ├── nuke.js
    └── package.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then to build our container image, we simply use the Docker CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -f ./Dockerfile -t instil-nuke .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And to run it locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 9000:8080 instil-nuke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then to test our function locally, we just need to hit our Lambda with an HTTP request. In this example we are posting an empty JSON body:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{"statusCode":200}%             
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The url seems strange, but the Runtime Interface Emulator is simply providing an endpoint that matches the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html" rel="noopener noreferrer"&gt;Invoke endpoint of the Lambda API&lt;/a&gt;. The only difference between this local URL and the real API URL is that our function name is hardcoded as &lt;code&gt;function&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Being able to run our function locally like this greatly shortens the feedback loop when developing your Lambda. There are other options out there for running Lambdas locally, for example &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html" rel="noopener noreferrer"&gt;sam local&lt;/a&gt;, but the container image approach gives you a local test environment that is much closer to how the function will be run on AWS.&lt;/p&gt;
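&lt;p&gt;The &lt;code&gt;docker-compose.yml&lt;/code&gt; in the project tree above is never shown; a plausible minimal version (assumed contents, not the actual file) is just a wrapper around the same build and run commands:&lt;/p&gt;

```yaml
# Assumed sketch of docker-compose.yml — a hedged equivalent of
# `docker build -f ./Dockerfile -t instil-nuke .` plus
# `docker run -p 9000:8080 instil-nuke`.
services:
  nuke:
    build: .
    image: instil-nuke
    ports:
      - "9000:8080"
```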

&lt;p&gt;Now that we have our project structure in place, let's take a look at adding aws-nuke to our container image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; public.ecr.aws/lambda/nodejs:12&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="Instil &amp;lt;team@instil.co&amp;gt;" &lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; update
&lt;span class="k"&gt;RUN &lt;/span&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install tar gzip&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./resources/aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz ./resources/nuke-config.yml ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; ./aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;mv &lt;/span&gt;aws-nuke-v2.15.0.rc.3-linux-amd64 aws-nuke

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./lambda/nuke.js ./lambda/package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [ "nuke.lambdaHandler" ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding dependencies works just as you would expect if you have used Docker before. In the example above we install dependencies with yum and copy aws-nuke into our image.&lt;/p&gt;

&lt;p&gt;Then all we need to do is update our function to execute aws-nuke.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;execSync&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;child_process&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;execSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;stdio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;inherit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;nuke&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Nuking this AWS account...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accessKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretAccessKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sessionToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AWS_SESSION_TOKEN&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`./aws-nuke -c nuke-config.yml --access-key-id &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;accessKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; --secret-access-key &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;secretAccessKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; --session-token &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sessionToken&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; --force --force-sleep 3`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Your AWS account has been nuked, you can sleep peacefully knowing that you will no longer get an unexpected bill.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lambdaHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;nuke&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use &lt;code&gt;execSync&lt;/code&gt; to execute a command in our running Lambda; it's easy to see how simple the new container image option makes it to use external dependencies in the Lambda environment. Notice that we pull AWS access tokens from environment variables so that aws-nuke can use them; this is &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html" rel="noopener noreferrer"&gt;default behaviour&lt;/a&gt; for Lambda functions, and they are the access keys obtained from the function's execution role.&lt;/p&gt;

&lt;p&gt;With our updated container image ready to nuke our account, all we need to do is deploy it. For this we need to create an ECR repository and push our image to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Replace [AWS_ACCOUNT_NUMBER] with your own AWS account number
aws ecr create-repository --repository-name instil-nuke --image-scanning-configuration scanOnPush=true
docker tag instil-nuke:latest [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
aws ecr get-login-password | docker login --username AWS --password-stdin [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com
docker push [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that our container image lives in AWS, we just need to create our Lambda function. In the Create function page of the AWS management console, you will notice there is a new option to use a Container image as your starting point:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F454ef5808015859413324a574ce591d7%2F00d43%2Fcreate-function-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F454ef5808015859413324a574ce591d7%2F00d43%2Fcreate-function-1.png" alt="Screenshot of the new Container image option on the create function page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choosing this option then enables you to pick your container image; click the Browse images button to select your freshly uploaded image. You should be left with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9908c7c6ebebd3670c89defa0fb5a498%2F00d43%2Fcreate-function-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9908c7c6ebebd3670c89defa0fb5a498%2F00d43%2Fcreate-function-2.png" alt="Screenshot of the Container image URI input, populated with the uploaded container image ECR URI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! All that's left to do is trigger our Lambda function. For our example we could detonate the nuke once a billing alarm crosses a certain threshold, but for the sake of keeping this article focused on container images, let's just trigger it with a test event for now and inspect the output. We will publish another article in the future explaining how to hook this up to a billing alarm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F643ff279e22dea476bc6b1db0fc72c94%2F00d43%2Ffunction-execution-result.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F643ff279e22dea476bc6b1db0fc72c94%2F00d43%2Ffunction-execution-result.png" alt="Screenshot of the result of our test execution"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the very disappointing output of our detonation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The above resources would be deleted with the supplied configuration. Provide --no-dry-run to actually destroy resources.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You didn't think I was actually going to &lt;em&gt;nuke my AWS account&lt;/em&gt; did you? 😊&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>AppSync and DynamoDB Lessons Learned</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Wed, 06 Jul 2022 20:22:00 +0000</pubDate>
      <link>https://forem.com/matthewwilson/appsync-and-dynamodb-lessons-learned-2d1a</link>
      <guid>https://forem.com/matthewwilson/appsync-and-dynamodb-lessons-learned-2d1a</guid>
      <description>&lt;p&gt;When building a serverless first platform it’s very hard to ignore the compelling feature set offered by &lt;a href="https://aws.amazon.com/appsync/" rel="noopener noreferrer"&gt;AWS AppSync&lt;/a&gt; and &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;. Each service has their own list of great features, but the magic lies when you combine these two services together. I have often said that the sweet spot when working with AWS is when you start to play AWS lego; use the building blocks provided by AWS to “click” the services together using configuration rather than code. &lt;/p&gt;

&lt;p&gt;We are using these two services to help build a fully serverless insurance platform, and in this article I’m going to share some lessons we have learnt along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Determine if you want to use single or multi table design with DynamoDB
&lt;/h3&gt;

&lt;p&gt;One thing the serverless community can agree on is that &lt;a href="https://www.alexdebrie.com/" rel="noopener noreferrer"&gt;Alex DeBrie’s DynamoDB book&lt;/a&gt; is a must read if you are starting your journey with DynamoDB. DeBrie lays out the best practices for working with DynamoDB, single table design and modelling your tables. He also gives advice on when it may &lt;em&gt;not&lt;/em&gt; be appropriate to use single table design, &lt;a href="https://www.alexdebrie.com/posts/dynamodb-single-table/#graphql--single-table-design" rel="noopener noreferrer"&gt;including when you are working with GraphQL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is a decision that you will want to make early on in your project. At this stage it's almost universally accepted that single table design is the right approach for most use cases. &lt;/p&gt;

&lt;p&gt;For our project we decided on a mixed approach. Where appropriate, we will use single table design to group closely related items together. Closely related items could be defined as items that are likely to be queried together. Determining your access patterns before working on a feature will help you to better understand the relationship between your entities and the types of queries you are going to use.&lt;/p&gt;

&lt;p&gt;For loosely related items that sometimes are queried together but also separately, we typically store those entities in separate tables. We rely on AppSync to resolve our entities and aggregate the data into a single response for the client.&lt;/p&gt;

&lt;p&gt;We would also use separate tables if the entity stored in a table is versioned. Rather than storing multiple different entities in a table, we would instead store multiple versions of an entity. Keeping the versioned entity in its own table helps to reduce complexity of that table and makes it easier to work with when resolving data from the table.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Model your tables following single table design best practices
&lt;/h3&gt;

&lt;p&gt;This point applies regardless of whether you are using a single table or multiple tables!&lt;/p&gt;

&lt;p&gt;Part of single table design is decoupling your partition key and sort key from the actual record you are trying to store.&lt;/p&gt;

&lt;p&gt;Take a record that has an ID: it would be tempting to make this field your partition key, with perhaps Date Created as your sort key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9d581e53be25f921847a92d8c88c52a6%2Fa1792%2Fbasic-table.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F9d581e53be25f921847a92d8c88c52a6%2Fa1792%2Fbasic-table.png" alt="Table showing basic ID and Date Created columns"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;This approach makes your table a lot less flexible in the future. Perhaps you will later realise that storing multiple entities in this table is a good idea; a partition key named ID makes it awkward to put something else, such as a username, in that field.&lt;/p&gt;

&lt;p&gt;Instead, store the entity type and identifier in generically named partitionKey and sortKey fields. This is commonplace in single table design, but it is still a good practice to follow when using multiple tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F62eed95aefc81047f7d2853aa52fa07c%2F227ba%2Fadvanced-table.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Finstil.co%2Fstatic%2F62eed95aefc81047f7d2853aa52fa07c%2F227ba%2Fadvanced-table.png" alt="Table showing more advanced ID and Date Created columns with dedicated partition and sort keys"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Some developers prefer to use PK and SK as the names for these fields. At Instil we tend to avoid acronyms where possible and prefer clear, unambiguous naming conventions, so we name these fields “partitionKey” and “sortKey”. At very large scale you might want to shorten attribute names to save on storage, but that isn’t an issue for our particular project.&lt;/p&gt;
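&lt;p&gt;As a minimal sketch (the &lt;code&gt;ORDER#&lt;/code&gt; and &lt;code&gt;CREATED#&lt;/code&gt; prefixes are illustrative conventions, not from our production schema), building these generic key fields in TypeScript might look like this:&lt;/p&gt;

```typescript
// Generic key fields, decoupled from the entity they store.
interface TableKey {
  partitionKey: string;
  sortKey: string;
}

// Build the key for an order item. Prefixing the identifier with the
// entity type ("ORDER#") leaves room for other entities in the same table.
function orderKey(orderId: string, dateCreated: string): TableKey {
  return {
    partitionKey: `ORDER#${orderId}`,
    sortKey: `CREATED#${dateCreated}`,
  };
}
```

&lt;p&gt;Because the fields are generic, a username item could later share the same table with, say, a &lt;code&gt;USER#&lt;/code&gt; prefix, without any schema change.&lt;/p&gt;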

&lt;p&gt;One side point: always include a sort key field even if you don’t need it right away, because you can't add one after the table has been created!&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Use AppSync DynamoDB resolvers
&lt;/h3&gt;

&lt;p&gt;A game changing moment for our platform was when we switched from using Lambda resolvers to DynamoDB resolvers. The ability to connect your API directly to your database delivers performance that a Lambda cannot match. It also enables developers to quickly add new features to your API whilst adding very little "code".&lt;/p&gt;

&lt;p&gt;Multi-table design makes this approach much simpler: it removes the need for the complex &lt;a href="https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-overview.html" rel="noopener noreferrer"&gt;VTL resolver templates&lt;/a&gt; that transform data returned from a table following single table design. AppSync will effortlessly query your tables in parallel and aggregate the responses into the format that matches your GraphQL schema.&lt;/p&gt;
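&lt;p&gt;For context, a request mapping template for a direct DynamoDB GetItem resolver can be as small as this sketch (the partitionKey/sortKey names follow our convention above, and the "METADATA" sort value is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "partitionKey": $util.dynamodb.toDynamoDBJson($ctx.args.id),
    "sortKey": $util.dynamodb.toDynamoDBJson("METADATA")
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The matching response mapping template is often just &lt;code&gt;$util.toJson($ctx.result)&lt;/code&gt;.&lt;/p&gt;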

&lt;h3&gt;
  
  
  4. Be wary of the N+1 Problem
&lt;/h3&gt;

&lt;p&gt;The N+1 problem isn’t specific to AppSync, it can occur in any GraphQL system when one top level query produces N items whose type contains a field that must also be resolved.&lt;/p&gt;

&lt;p&gt;For example, in the query below, listOrders returns N items, and each item has a reviews field that must be resolved in turn.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query getOrders {
  listOrders { # 1 top-level query
    items { # N items
      id
      reviews { # N additional queries
        author
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS has some advice on how to work around this with &lt;a href="https://aws.amazon.com/blogs/mobile/introducing-configurable-batching-size-for-aws-appsync-lambda-resolvers/" rel="noopener noreferrer"&gt;configurable batching for AppSync Lambda resolvers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, I think the N+1 problem should instead be factored into your GraphQL schema and DynamoDB table design and eliminated where possible. Perhaps there is an argument here for storing the orders and reviews in a single table to aid with querying that data, while other entities still live in different tables.&lt;/p&gt;
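&lt;p&gt;As a sketch of that idea (the key shapes below are illustrative): if the order metadata row and its review rows share a partition, a single DynamoDB Query replaces the N follow-up review queries.&lt;/p&gt;

```typescript
// Build the parameters for a single Query that returns an order's metadata
// row and all of its review rows together. The key shape ("ORDER#...") is
// an illustrative convention, not from our production schema.
function orderWithReviewsQuery(tableName: string, orderId: string) {
  return {
    TableName: tableName,
    // Every row for the order lives in one partition, so one Query
    // fetches the order and its reviews, avoiding N+1 round trips.
    KeyConditionExpression: "partitionKey = :pk",
    ExpressionAttributeValues: { ":pk": `ORDER#${orderId}` },
  };
}
```

&lt;p&gt;These parameters could be passed to the AWS SDK's Query operation, or the equivalent key condition could be expressed in an AppSync resolver template.&lt;/p&gt;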




&lt;p&gt;Hopefully you have found these points useful as you continue your development journey with AppSync and DynamoDB. We have found that the combination of these two services has provided a flexible architecture that we have been able to extend and refactor over time as we deliver new features. &lt;/p&gt;

&lt;p&gt;We develop custom cloud and mobile software products for some of the world's leading brands. If you would like to learn more &lt;a href="https://instil.co/software-enquiry/" rel="noopener noreferrer"&gt;get in touch&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>appsync</category>
    </item>
    <item>
      <title>Improving CDK development cycle</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Wed, 16 Mar 2022 22:50:00 +0000</pubDate>
      <link>https://forem.com/matthewwilson/improving-cdk-development-cycle-4d6h</link>
      <guid>https://forem.com/matthewwilson/improving-cdk-development-cycle-4d6h</guid>
      <description>&lt;p&gt;Here at Instil we have been working on a fully serverless car insurance platform that utilises the &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html"&gt;AWS CDK&lt;/a&gt; for deployments. One of the lessons we learnt early on in this project is that there is no substitution for deploying and testing our code in AWS; each of our developers have their own AWS accounts that they regularly deploy code to. However with CDK sometimes these deployments can take a while...&lt;/p&gt;

&lt;p&gt;As an example, we have some &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html"&gt;versioned Lambda functions&lt;/a&gt; that run pre-traffic test functions with &lt;a href="https://docs.aws.amazon.com/cdk/api/v1/docs/aws-codedeploy-readme.html"&gt;CodeDeploy&lt;/a&gt; to verify that our deployed code is configured and behaving correctly. Typically these can take 3-4 mins to deploy, run tests, and switch traffic when using the &lt;code&gt;cdk deploy [stackname]&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;While this functionality is extremely useful, those 3-4 mins spent waiting on deployments add up. Thankfully, CDK has a few ways of shortening this feedback loop that we should all be aware of. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;a href="https://github.com/aws/aws-cdk/tree/master/packages/aws-cdk"&gt;AWS CDK Toolkit (aka the CDK CLI) page on GitHub &lt;/a&gt;covers all of the CLI commands in detail and is worth keeping up-to date with for any future improvements!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is hotswapping?
&lt;/h2&gt;

&lt;p&gt;You can pass the &lt;code&gt;--hotswap&lt;/code&gt; flag to the &lt;code&gt;deploy&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ cdk deploy --hotswap [StackNames]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you have only changed the code of a Lambda function and nothing else in your CDK app, this will attempt a faster deployment by skipping CloudFormation and updating the affected resources directly. If something cannot be hotswapped, CDK will fall back and perform a normal deployment.&lt;/p&gt;

&lt;p&gt;Currently hotswapping is supported in the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code asset (including Docker image and inline code) and tag changes of AWS Lambda functions.&lt;/li&gt;
&lt;li&gt;AWS Lambda Versions and Aliases changes.&lt;/li&gt;
&lt;li&gt;Definition changes of AWS Step Functions State Machines.&lt;/li&gt;
&lt;li&gt;Container asset changes of AWS ECS Services.&lt;/li&gt;
&lt;li&gt;Website asset changes of AWS S3 Bucket Deployments.&lt;/li&gt;
&lt;li&gt;Source and Environment changes of AWS CodeBuild Projects.&lt;/li&gt;
&lt;li&gt;VTL mapping template changes for AppSync Resolvers and Functions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This list may change, so be sure to check &lt;a href="https://github.com/aws/aws-cdk/tree/master/packages/aws-cdk#hotswap-deployments-for-faster-development"&gt;this page&lt;/a&gt; for the most up-to-date version.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The hotswap command deliberately introduces drift in CloudFormation stacks in order to speed up deployments. For this reason, only use it for development purposes. &lt;strong&gt;Never use this flag for your production deployments&lt;/strong&gt;!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How does CDK know when to hotswap?
&lt;/h2&gt;

&lt;p&gt;Behind the scenes CDK uses another useful CLI command: &lt;a href="https://github.com/aws/aws-cdk/tree/master/packages/aws-cdk#cdk-diff"&gt;cdk diff&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The diff command computes differences between the infrastructure specified in the current state of the CDK app and the currently deployed application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you run &lt;code&gt;cdk diff&lt;/code&gt; after making only code changes you will see output that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stack MyStack
Resources
[~] AWS::Lambda::Function
 ├─ [~] Code
 │   └─ [~] .S3Key:
 │       ├─ [-] 113f08cf527f79c4a1b851dce481b9567a4e7c6dd5a9e0f47b35692837df05ac.zip
 │       └─ [+] 21127a377c84d613789121f2be5ee85c6e285d40d6fdf0db50467ec44998faa0.zip
 └─ [~] Metadata
     └─ [~] .aws:asset:path:
         ├─ [-] asset.113f08cf527f79c4a1b851dce481b9567a4e7c6dd5a9e0f47b35692837df05ac
         └─ [+] asset.21127a377c84d613789121f2be5ee85c6e285d40d6fdf0db50467ec44998faa0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the changes refer only to an asset and nothing else in the stack. This is considered “hot-swappable” and will be hotswapped successfully when using the &lt;code&gt;--hotswap&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;cdk diff&lt;/code&gt; when there are non-code changes would output something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources
[~] AWS::Lambda::Function
 ├─ [~] Code
 │   └─ [~] .S3Key:
 │       ├─ [-] 113f08cf527f79c4a1b851dce481b9567a4e7c6dd5a9e0f47b35692837df05ac.zip
 │       └─ [+] 21127a377c84d613789121f2be5ee85c6e285d40d6fdf0db50467ec44998faa0.zip
 ├─ [~] Environment
 │   └─ [~] .Variables:
 │       └─ [+] Added: .MATTHEW_WAS_HERE
 └─ [~] Metadata
     └─ [~] .aws:asset:path:
         ├─ [-] asset.113f08cf527f79c4a1b851dce481b9567a4e7c6dd5a9e0f47b35692837df05ac
         └─ [+] asset.21127a377c84d613789121f2be5ee85c6e285d40d6fdf0db50467ec44998faa0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that the changes include an environment variable addition as well as code changes. This cannot be hotswapped and would fall back to a normal deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watching for changes
&lt;/h2&gt;

&lt;p&gt;CDK also provides a handy command called &lt;code&gt;cdk watch&lt;/code&gt;. It uses the hotswap functionality, continuously monitors the project's files, and triggers a deployment whenever it detects a change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws/aws-cdk/tree/master/packages/aws-cdk#cdk-watch"&gt;https://github.com/aws/aws-cdk/tree/master/packages/aws-cdk#cdk-watch&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A nice feature of this command is that it will also tail the CloudWatch logs after deploying your code, shortening your feedback loop even more.&lt;/p&gt;
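&lt;p&gt;The set of files monitored is controlled by the &lt;code&gt;"watch"&lt;/code&gt; key in &lt;code&gt;cdk.json&lt;/code&gt;; a typical configuration (the app entry point and exclude list here are illustrative) looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "app": "npx ts-node bin/my-app.ts",
  "watch": {
    "include": ["**"],
    "exclude": ["README.md", "cdk*.json", "**/*.d.ts", "**/*.js", "node_modules"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;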

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In our testing, using &lt;code&gt;--hotswap&lt;/code&gt; for our versioned Lambdas shaved around 4 minutes off our deployment times! &lt;/p&gt;

&lt;p&gt;To ensure that hotswapping functions correctly, adhere to the CDK principle of keeping your stacks &lt;em&gt;deterministic&lt;/em&gt;; for example, avoid dynamically generating properties of your constructs (e.g. from a timestamp or network lookups). &lt;/p&gt;

&lt;p&gt;If you or your team want to learn more about CDK, be sure to check out our &lt;a href="https://instil.co/courses/aws-serverless-typescript/"&gt;TypeScript for AWS Serverless Course&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cdk</category>
      <category>aws</category>
    </item>
    <item>
      <title>CDK Lessons Learned</title>
      <dc:creator>Matthew Wilson</dc:creator>
      <pubDate>Wed, 16 Mar 2022 22:48:26 +0000</pubDate>
      <link>https://forem.com/matthewwilson/cdk-lessons-learned-3gae</link>
      <guid>https://forem.com/matthewwilson/cdk-lessons-learned-3gae</guid>
      <description>&lt;p&gt;The &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS Cloud Development Kit (CDK)&lt;/a&gt; allows you to define your AWS resources using the &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-typescript.html"&gt;programming languages you know and love&lt;/a&gt;. This concept piqued the interest of many of us here at Instil; when someone offers us the ability to use Typescript instead of YAML we’re sold! &lt;/p&gt;

&lt;p&gt;I have been using CDK for the past three years on container-based and serverless projects, and what I think is CDK’s greatest strength is the set of guard rails it provides to the developer: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Like any good API it is self-documenting; the declarative style guides you towards resources that are compatible with each other, without the need to know every feature of every AWS service. &lt;/li&gt;
&lt;li&gt;The built-in helpers for generating IAM roles give you safe defaults to ensure you are following security best practices. &lt;/li&gt;
&lt;li&gt;The high-level (level 3) constructs, as well as open-source collections (like &lt;a href="https://cdkpatterns.com"&gt;CDK Patterns&lt;/a&gt; and &lt;a href="https://constructs.dev"&gt;Construct Hub&lt;/a&gt;), accelerate your application development by providing patterns that ensure your architecture is scalable, cost effective and secure. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite all of CDK’s strengths, it’s important to know the weaknesses of the framework and protect your team from the pitfalls they could encounter. Here are the top five lessons we have learnt on our CDK journey. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Separate your stateless and stateful resources into different stacks
&lt;/h3&gt;

&lt;p&gt;Keeping your stateful resources (e.g. DynamoDB tables, RDS instances or Cognito User Pools) in separate stacks from your stateless resources (e.g. Lambda functions or ECS services) means that you can delete and re-create your stateless stacks without losing any data in the process. This is a lesson we learnt early in our CDK journey, and it has proven a useful pattern across multiple projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Turn on &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-protect-stacks.html"&gt;termination protection&lt;/a&gt; for your production environments
&lt;/h3&gt;

&lt;p&gt;It’s very easy for a developer to log in to the wrong AWS account and accidentally delete a CloudFormation stack (don’t ask me how I know this!). Turning on termination protection is an extra layer of protection that helps stop those pesky developers taking down an environment by mistake 🙈. Using CDK makes it easy to include conditional logic that only enables termination protection in production accounts, reducing day-to-day overhead for developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_core.Stack.html#terminationprotection"&gt;https://docs.aws.amazon.com/cdk/api/v1/docs/@aws-cdk_core.Stack.html#terminationprotection&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: I used the term "extra layer" of protection. Termination protection shouldn't be used as an excuse for poorly managed IAM roles with more privileges than required!&lt;/p&gt;
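&lt;p&gt;The conditional logic can be as simple as this sketch (the account ID and helper name are placeholders, not from our codebase):&lt;/p&gt;

```typescript
// Illustrative helper: only enable termination protection for stacks that
// deploy to a known production account. The account ID below is made up.
const PRODUCTION_ACCOUNTS = ["111111111111"];

function shouldProtectStack(accountId: string): boolean {
  return PRODUCTION_ACCOUNTS.includes(accountId);
}

// In a CDK app this could feed the standard StackProps field:
//   new MyStack(app, "MyStack", {
//     terminationProtection: shouldProtectStack(targetAccountId),
//   });
```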

&lt;h3&gt;
  
  
  3. Refactoring isn't as easy as you think
&lt;/h3&gt;

&lt;p&gt;This is a lesson that has taken us longer to learn. We love clean code at Instil; refactoring is everything to us! But because CDK is “code”, you can be lulled into a false sense of security when it comes to refactoring. Simply moving a resource from one stack to another can cause it to be deleted and recreated. Sometimes you can have a beautifully refactored stack, only to learn at deployment time that you have broken something because a resource is referenced by other stacks.&lt;/p&gt;

&lt;p&gt;A rule of thumb we like to follow is this: when a stack has been deployed to production, take extra care when refactoring. Deploy the original version to your own developer account first, and then try to deploy your refactored version. &lt;/p&gt;

&lt;p&gt;Be careful not to change the logical IDs of your CDK resources; doing so will result in the deletion and re-creation of the resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Use the IAM convenience methods
&lt;/h3&gt;

&lt;p&gt;One of the best guard rails provided by CDK is the set of convenience methods for automatically creating IAM roles. For example, if you have a Lambda function that needs read-only access to a single DynamoDB table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;myDynamoTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grantReadData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;myLambdaFunction&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of these convenience methods start with &lt;code&gt;.grant&lt;/code&gt;, making them easy to discover in your IDE.&lt;br&gt;
Using these convenience methods makes it easier to follow the principle of least privilege without having to write your own IAM roles. If you find yourself writing your own roles, think twice and check whether a convenience method exists instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Use abstractions carefully
&lt;/h3&gt;

&lt;p&gt;There are &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/constructs.html#constructs_lib"&gt;multiple levels of constructs&lt;/a&gt; that CDK provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L1 - “L1 constructs are exactly the resources defined by AWS CloudFormation”&lt;/li&gt;
&lt;li&gt;L2 - “L2 constructs also represent AWS resources, but with a higher-level, intent-based API. They provide similar functionality, but provide the defaults, boilerplate, and glue logic you'd be writing yourself with a CFN Resource construct”&lt;/li&gt;
&lt;li&gt;L3 - “These constructs are designed to help you complete common tasks in AWS, often involving multiple kinds of resources.” These are more like an opinionated pattern. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is tempting to build a bunch of “L2.5” constructs that are specific to your project, for example a &lt;code&gt;ProjectNameS3Bucket&lt;/code&gt;. There is a time and a place where you might want these, especially when you want to avoid duplicating code. However, we recommend that you don't do this too early in the project. Try to keep these pointers in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are you reducing the flexibility of the underlying L2 construct? It’s hard to make one configuration of a bucket useful across multiple stacks in a project.&lt;/li&gt;
&lt;li&gt;Do your default values work for all use cases of your construct? Could someone change a default property of the shared construct and unknowingly break another part of the application?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope that you found this helpful. As I said at the beginning there are many strengths with CDK but it is important to be aware of the pitfalls. If you or your team want to learn more about CDK, be sure to check out our &lt;a href="https://instil.co/courses/aws-serverless-typescript/"&gt;TypeScript for AWS Serverless Course&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cdk</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
