<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Custom Ink Technology</title>
    <description>The latest articles on Forem by Custom Ink Technology (@customink).</description>
    <link>https://forem.com/customink</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2548%2F8af4e0f3-9720-4cf5-9d0b-a31428358223.png</url>
      <title>Forem: Custom Ink Technology</title>
      <link>https://forem.com/customink</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/customink"/>
    <language>en</language>
    <item>
      <title>Custom Ink's Kubernetes Journey</title>
      <dc:creator>Martin Bonica</dc:creator>
      <pubDate>Wed, 22 Nov 2023 16:54:04 +0000</pubDate>
      <link>https://forem.com/customink/custom-inks-kubernetes-journey-4n95</link>
      <guid>https://forem.com/customink/custom-inks-kubernetes-journey-4n95</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a crosspost from &lt;a href="https://technology.customink.com/blog/2023/10/09/customink-on-kubernetes/" rel="noopener noreferrer"&gt;Custom Ink's tech blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There's been an elephant in the room during the past few Custom Ink Tech Blog updates. Perhaps it's been alluded to, but we've been consistently putting off addressing it here, simply because of its scale, as well as how many subsequent blog posts could be written wrestling with its implications. (This, of course, means we've also been putting off said subsequent blog posts, because we never got around to writing this one.) &lt;/p&gt;

&lt;p&gt;So, it's time to rip off the proverbial band-aid: the majority of Custom Ink's compute workload is now running on Kubernetes, by way of Amazon EKS. (The rest is, as you might have guessed by reading this blog, on AWS Lambda.) We no longer use Chef, and we no longer use Capistrano to deploy our services and applications directly to EC2 instances. We run everything behind Customink.com without keeping track of individual servers and persistent file systems, and all app-specific infrastructure configuration - OS, libraries, packages, resources allocated, ingress - now lives in the same Git repository as a service's source code, where developers can change it as they see fit.&lt;/p&gt;

&lt;h1&gt;The Journey Begins&lt;/h1&gt;

&lt;p&gt;Even before Custom Ink moved to the cloud, the pattern of persistent, stateful servers managed by Chef with code deployed by Capistrano served us well. Our infrastructure wasn't exactly ephemeral; it would take some manual action to get a new server up, get a load balancer pointed at it, and tell Chef to tell Capistrano it was ready to deploy to. Servers tended to stick around for a while (so, not ephemeral), and when it was time for an upgrade, we'd upgrade them (so, not immutable either). That said, that pattern stayed with us in the cloud, followed us to a resilient multi-AZ layout, and absolutely beats doing everything by hand.&lt;/p&gt;

&lt;p&gt;In this engineer's opinion, a cultural shift within development is what motivated us to begin the journey to ephemeral and immutable infrastructure. As you might have gleaned from some &lt;a href="https://technology.customink.com/blog/2020/01/03/migrate-your-rails-app-from-heroku-to-aws-lambda/" rel="noopener noreferrer"&gt;past&lt;/a&gt; &lt;a href="https://technology.customink.com/blog/2020/03/13/using-aws-sam-cookiecutter-project-templates-to-kickstart-your-ambda-projects/" rel="noopener noreferrer"&gt;blog&lt;/a&gt; posts, Tech Inkers know a thing or two about Heroku. The venerable PaaS taught a generation of developers that, if they could generate their runtime environment and compute resources from code instead of asking a sysadmin for it, they could deliver more, and do it faster. Over time, everyone arrived at the same conclusion: &lt;em&gt;we can do this faster, and with less work&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It's important to note the moment in which this sea change took place. These conversations were happening at Custom Ink in late 2019. (Yes, this blog post has been marinating for quite some time.) This meant that the immutable, ephemeral platform we were already dipping our toes into - AWS Lambda - was still a little rough around the edges. We were still getting the hang of how to use Lambda layers to bring in binaries we needed for some of our services (say, MySQL and Oracle connectors) into the execution environment. &lt;a href="https://technology.customink.com/blog/2019/04/16/secure-configs-with-aws-ssm-parameter-store-and-rails-on-lambda/" rel="noopener noreferrer"&gt;Secure secret injection out of the box was not really a thing yet.&lt;/a&gt; Finally, keep in mind that &lt;a href="https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/" rel="noopener noreferrer"&gt;Lambda would not support OCI images for another year.&lt;/a&gt; It was (and still is) fertile ground for new development, but we had a backlog of existing services that needed a new home.&lt;/p&gt;

&lt;p&gt;OCI images (&lt;a href="https://github.com/opencontainers/image-spec" rel="noopener noreferrer"&gt;Docker, for the layperson&lt;/a&gt;) and a container orchestration platform seemed to be our quickest way to lift and shift our services to an ephemeral, immutable infrastructure. Thankfully, our applications were already stateless; environment-specific configuration was handled by dotenv, which was happy to ingest values from environment variables, and everything else lived in either databases, memory caches, or S3 buckets. There were few to no persistent files to worry about. Thanks, Rails!&lt;/p&gt;

&lt;p&gt;Even in 2019, there were &lt;a href="https://www.lastweekinaws.com/blog/the-17-ways-to-run-containers-on-aws/" rel="noopener noreferrer"&gt;some&lt;/a&gt; &lt;a href="https://www.lastweekinaws.com/blog/17-more-ways-to-run-containers-on-aws/" rel="noopener noreferrer"&gt;options&lt;/a&gt; for running containers in AWS. Kubernetes won because it had mindshare on the team, and because it was the most flexible. (This last part, as we will soon learn, can be a blessing or a curse.)&lt;/p&gt;

&lt;p&gt;If you're curious, our first customer-facing production service hosted on Kubernetes was the international checkout component of our website, which launched in March of 2020. The web frontend of customink.com was on Kubernetes in June of 2022, and the last to make the leap was the service that handles clipart in the design lab, which moved in January of 2023.&lt;/p&gt;

&lt;h1&gt;EKS Essentials&lt;/h1&gt;

&lt;p&gt;Anyone who has spun up an EKS cluster knows the feeling: looking at the console, maybe creating a "hello world" pod and watching it get scheduled somewhere, and thinking, "OK, what now?"&lt;/p&gt;

&lt;p&gt;Although Fargate, EKS managed node groups, and Karpenter have since come along and made things a bit easier, there's a hard line between resources managed in the AWS API and resources behind the API of your Kubernetes cluster. If you want to spin up a pod, you need to connect to the control plane of your EKS cluster and ask for it. That means you have to authenticate to it first. If you want something on the internet to be able to connect to that pod, you need to figure out how to get Kubernetes to tell AWS to point an IP address (or two, or three) at a port where your pod is reachable. If you need to get secrets (say, API tokens or passwords) into the environment of your pod, it's on you to decide how to get them out of your secrets manager of choice. EKS is very hands-off in that regard, unlike other, more opinionated Kubernetes distributions. What follows is how we chose to instrument our Kubernetes clusters; it is not the only way to do this, and there are certainly more CRDs and daemonsets running than those listed here, but mastering these was critical to our adoption of Kubernetes.&lt;/p&gt;

&lt;h2&gt;aws-auth&lt;/h2&gt;

&lt;p&gt;One nice thing EKS comes with out-of-the-box is the &lt;a href="https://github.com/kubernetes-sigs/aws-iam-authenticator#readme" rel="noopener noreferrer"&gt;AWS IAM Authenticator for Kubernetes&lt;/a&gt;. It allows us to map IAM principals to users within Kubernetes. This is handy; it means we don't have to worry about giving everyone a Kubernetes identity, and instead just scope their privileges based on their IAM identity. &lt;/p&gt;

&lt;p&gt;By default, the only user who is allowed to talk to the Kubernetes control plane is the IAM principal that created the cluster. Even if that principal was a role assumed by a group of people, that's less than ideal; after all, from Kubernetes' perspective, that means there's only one user, and no way to reduce the scope of that user's access (which is, of course, full admin). Thankfully, the IAM authenticator gives us one more thing: the ability to edit a ConfigMap called &lt;code&gt;aws-auth&lt;/code&gt; to associate IAM principals with groups, and groups with Roles or ClusterRoles. This means we can have an IAM role called "ReadOnly" that we map to a Kubernetes role that can only do "get" actions, or an IAM role called "PowerUser" that's allowed to restart deployments - you get the picture. This is how we map external entities, such as engineers, developers, or external software that operates on the Kubernetes API, to Kubernetes RBAC. All the calling entities need to do is run or implement the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html" rel="noopener noreferrer"&gt;AWS IAM Authenticator&lt;/a&gt; process, which in a nutshell uses IAM credentials to call the AWS EKS API. The caller gets a temporary session token in return, which is then good for talking to the Kubernetes control plane.&lt;/p&gt;
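&lt;p&gt;To illustrate (the role ARN, account ID, and group name below are placeholders, not our actual configuration), a &lt;code&gt;mapRoles&lt;/code&gt; entry in the &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap might look like this:&lt;/p&gt;

```yaml
# Sketch of an aws-auth ConfigMap entry; the IAM role ARN and
# group name are illustrative placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/ReadOnly
      username: readonly
      groups:
        - readonly-group
```

&lt;p&gt;A RoleBinding or ClusterRoleBinding then grants &lt;code&gt;readonly-group&lt;/code&gt; whatever "get"-only permissions it should have.&lt;/p&gt;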

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhev62hmlrsavm1qsqrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhev62hmlrsavm1qsqrh.png" alt="mermaid.js flow chart of AWS IAM Authenticator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNp1klFvmzAQx7_KyS95GEFAEqB-6IS6qou6SpHSadrEiweXgAJ2ZhtlLMp33xlYq6idX2yd7373_5_uzApVIuPM4K8OZYGfarHXos0l0PlqUM9vbz9k37Zw_7iFB7TzZ3VACdlmzeEzapwZaHtYZ09QaCxR2lo0xoc7IWENe7QgwA4VVoEVzcHdj91P1BItmo9jn3f5rrETwGHbafSgGrtNvGuBr8RR2XNVm6mvEb2B9ayFFn3YNCgMOqnCIqGOqvRH0jXBMZ2nrLOV81QIq0hIphEo0INoGnXC0pkpyVcl7OTkTZEjiZOZCwpCoeSu3rfiyOELjaYlJRUWB8d8_ZsEvS16z-h3NN4gaUbSakmW9lp1RzjVVClAqwYHeaNi8z-vG1XCnZKW8hs38Qf1b0ZDsRsT81iLuhV1SdtydqCcUeMWc8bpWeJOdI3NWS4vlErK1baXBeNWd-ix7lgSbVqu6-B9WdOcGN_R4lDwKOQPpV5ycPh9Gpd02NUhhfEz-814nPhRFCXhTRBFYbxcLjzWMx7GqR-tgjRO43iZJFESXTz2Z4CG_mIRBkEQBWm6WsbJanH5C_St-TM" rel="noopener noreferrer"&gt;View on mermaid.live&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Ingress&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://technology.customink.com/blog/2020/06/11/simplifying-custominks-http-accelerator-with-aws-cloudfront-and-application-load-balancer/" rel="noopener noreferrer"&gt;As we've written before&lt;/a&gt;, we make use of path-based routing for some (but not all) of our customer-facing applications. Thankfully, making this change on the Kubernetes side was not a problem. The Kubernetes Ingress API allows for rule-based routing, and the &lt;a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/" rel="noopener noreferrer"&gt;AWS Load Balancer Controller&lt;/a&gt; allows us to implement the ingress spec by creating and controlling Application Load Balancers. We use the AWS Load Balancer Controller to deploy a new IngressClass - let's call it the "shared ingress" - which represents a load balancer with no rules, just various ACM certificates and security settings. Each individual application is then responsible for declaring an Ingress, specifying what paths and hostnames it listens on, using the aforementioned shared ingress as its IngressClass. The AWS Load Balancer Controller sees that an Ingress is using an IngressClass it created, and adds the corresponding rules to the "shared" load balancer. This allows us to control load balancers however we want, and even share them, from within the Kubernetes API.&lt;/p&gt;
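&lt;p&gt;A minimal sketch of such an application-owned Ingress (the names, hostname, and path are illustrative; the &lt;code&gt;group.name&lt;/code&gt; annotation is one mechanism the AWS Load Balancer Controller offers for merging rules onto a shared ALB):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Merge this Ingress's rules into the shared load balancer group
    alb.ingress.kubernetes.io/group.name: shared-ingress
spec:
  ingressClassName: shared-ingress  # the IngressClass backing the shared ALB
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /my-app
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```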

&lt;h2&gt;External Secrets&lt;/h2&gt;

&lt;p&gt;There are a lot of ways and places to manage secrets, but our biggest requirement is for secrets to be encrypted at rest, restricted by IAM policies, and managed somewhere other than the Kubernetes control plane. At this time, we are using SecureString parameters in AWS Systems Manager Parameter Store. The parameters are encrypted with KMS, and IAM limits the principals that can decrypt them.&lt;/p&gt;

&lt;p&gt;We elected to use the &lt;a href="https://external-secrets.io/latest/" rel="noopener noreferrer"&gt;External Secrets Operator&lt;/a&gt; as a mechanism to turn SSM parameters into Kubernetes Secret resources. This tool provides a number of Kubernetes Custom Resource Definitions (that is, new objects for the API to use) which allow us to access external secret stores. The key objects here are SecretStore and ExternalSecret resources. The SecretStore object instructs the ESO to connect to a secrets backend of choice. An ExternalSecret object can then be created, referencing the SecretStore we created, asking for certain values to be pulled from that backend. (In our case, the SecretStore is pointed at AWS SSM.) When an ExternalSecret is created or modified, the ESO will connect to the secrets backend, retrieve the secrets, and create a Kubernetes Secret object populated with the values. We can then use this Secret as we would any other Secret or ConfigMap, mounting it on a pod's filesystem or exposing it as environment variables so our application can use the secrets.&lt;/p&gt;
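&lt;p&gt;As a sketch (the names, region, and parameter path are illustrative), the two objects look something like this:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-ssm
spec:
  provider:
    aws:
      service: ParameterStore  # point the store at SSM Parameter Store
      region: us-east-1
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-ssm
    kind: SecretStore
  target:
    name: my-app-secrets  # the Kubernetes Secret the ESO creates
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: /my-app/production/database_password
```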

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2811ppkqffo2dx1y3yaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2811ppkqffo2dx1y3yaa.png" alt="mermaid.js flow chart of External Secrets Operator and AWS SSM Parameter Store"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNqNksFu2zAMhl-F4KWHuoGtJLanQ4sBK9oeghYI0EPhC2cxiTFbSiU5bRrk3Ss7TpZtGDadBIr8-P8Ud1gaxSjR8WvLuuRvFS0tNYWGcG7fPVtN9ZxLy_7q-vrycJt7Y1nCA2hmBRuqW3awsKYBv2JwfY6D71T-YK1GB9ZZZQc6ooe4g8c1WwqvEu4MLNn3qN_RvyL_yuiVzmfwRMEKhyQYFN-z5QsHzRYevs4gFCkXQUk6WFnRhrse7mTgZhD-J-gfBrousDUtLM1_Ke1jYZwXdd1pIs9Agwh4q_zqbBTns7zqip-MkgG_Nj91w6YiaEyrPRgLrDfPZDHChm1DlQp_vesoBQZqwwXKcFW8oLb2BRZ6H1Kp9Wa-1SVKb1uOsF2rIGpYDZQLqt0pequq4OMUXJN-MaY5VnL_OjvsWL9qfQrKHb6jTLORECJLvsRCJOlkMo5wizJJ85GYxnmap-kky0Qm9hF-9NBkNB4ncRyLOM-nkzSbjvefcnXqaA" rel="noopener noreferrer"&gt;View on mermaid.live&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Service Accounts and IAM&lt;/h2&gt;

&lt;p&gt;As you might have inferred by now, there are lots of pods within our Kubernetes cluster that need IAM permissions: either to perform infrastructure actions to support our services (create ALBs, get secrets, etc.) or in the normal course of a service operating (say, uploading a file to S3 or sending a message down an SQS queue). While permissions associated with the IAM role of a Kubernetes worker node can be inherited by all the pods on it, we really don't want that; after all, that would mean everything on the cluster would have the same role, with very broad access. We would prefer that applications running in Kubernetes be scoped to their own IAM role. Thanks to the OIDC capabilities of EKS and IAM, this is pretty easy to do.&lt;/p&gt;

&lt;p&gt;EKS clusters come with their own unique OIDC URLs. These URLs, along with a unique thumbprint associated with them, allow &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html" rel="noopener noreferrer"&gt;IAM OIDC providers&lt;/a&gt; to verify that a caller from outside of AWS (say, the EKS control plane) is who it claims to be. We can then add the OIDC provider for our new Kubernetes cluster to the trust relationship of an IAM role. (We can scope the trust relationship further, to namespace and service account name, if we really don't want the app that uploads clipart to be able to assume the role that lets it play with load balancers.)&lt;/p&gt;

&lt;p&gt;With this in place, all we need to do is create a ServiceAccount object, add the IAM role we want to assume as an annotation, and associate it with our pods.&lt;br&gt;
Some more EKS-specific magic then takes place in the background; the &lt;a href="https://github.com/aws/amazon-eks-pod-identity-webhook" rel="noopener noreferrer"&gt;AWS EKS Pod Identity Webhook&lt;/a&gt; sees that a pod is associated with a ServiceAccount that wants to use an IAM role, and makes an AWS STS call to IAM asking for a temporary web identity token for that role. If OIDC was set up right, it gets a valid web identity token, and mounts it, along with the name of the region it's running in, as environment variables in the pod. At that point, your code, and the AWS SDK (assuming you're on a version new enough to recognize web identity tokens) will run as if it's on an EC2 instance with an IAM role.&lt;/p&gt;

&lt;p&gt;This way, we can associate pods with different IAM roles, and prevent services from encroaching on each other's permissions.&lt;/p&gt;
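&lt;p&gt;In manifest form (the account ID, role name, and image are placeholders), the wiring is just an annotated ServiceAccount referenced from the pod spec:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    # The IAM role this service account assumes via the OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      serviceAccountName: my-app  # pods get the role via the identity webhook
      containers:
        - name: app
          image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:abc1234
```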

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1vu838mqr9fuin7a35r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1vu838mqr9fuin7a35r.png" alt="mermaid.js flow chart of AWS EKS using the Pod Identity Webhook to grant a pod an IAM role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNp9k0GP2jAQhf_KyJc9NEUhQEJz2ArtVloOq25L1VUrLsYZEovETm2HbYr47ztOgF0Eak6OPe_Tmzf2jgmdIUuZxT8NKoH3kueGV0sF9D3p7OPt7YcFmq0UOBNCN8qlMIcXrhw4DSsEvirRLzMN89kjuEKq3A56_bnQo4gI8wyVk66FZ1wVWm_egIW0UFMF4bi1TYX9lucaXeLg5OqC4dmz5wUsfixSuOOKkDk64PCCK5DHYqc3qGCtTc_1zM8986D1mK_z-zt4MnpLMpPCA7YBUHGv8ClZ4grX8LJsQeiK-oW10RVVIIiysQ7NgXqGOrP4C21AHgVBMPPKClZcbICr7q8Fy2UG0lEwtju-sOlD-U4NUHoHc7WRSsial0BU_YLvc3zX6lF46cc30Fl-IxFWqu7AGeqMAii5k1rZQtYXlq4P9wEN3lhodWOuTON_Mz0w0_lNBaLgKvdRd5yKK7n2g6AOK3-3OotXZu3zNJiTY98Hqu1PbuzZ5e78f1scBl30ZukC0R5UaC3PkQWsQlPRROih7Lx6yfxMcMlSWma45k3plmyp9lTKG6cXrRIspcgwYE2dcXd8Vyxd89Kedr9k0mlz2qy5-q11dVRid_rYP9DunXYlLN2xvyyNk0EURcnwUxhFw3g8HgWsZekwng6iSTiNp3E8TpIoifYB-9dBh4PRaBiGYRROp5NxnExG-1fmE039" rel="noopener noreferrer"&gt;View on mermaid.live&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;The Deployment Pipeline&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5ow5jp7ffr8pn0e1rlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5ow5jp7ffr8pn0e1rlz.png" alt="mermaid.js flow chart of Custom Ink's Deployment pipeline to Kubernetes using ArgoCD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNqNkk9vozAQxb-K5ROV0ghIApTDSikku9k0UpRUqlT54sJArcIYGbPabJrvXpM_hLaXPWHGvzfvjTV7msgUaEhzxatX8hgznFobqGQttFS7m9vbH-8rUDmQ9eb93rpvRJGStaigEAhkKZI3mWUtFTGsm5dTl2jRIQyjsyiWyRuoTBTQ4sYnth6h1nX7N2M4s2KoCrkjtea5wJwA_hFKYgmoW2TOcH5BKiXTJtFC4ieKIWDKsBdk-rQlAjPFa62MoFFtHtNsZc2ijeEX7Znhw-nzVb09J9FSmlFyk9FwP61l8wIKQUNNVhwzYYYgvRcjBiK_rIt4qnIZxe0Ei6527XDTGp59e87r64Cd-dy0-P1f5kurp7_6P_TL3yPQAS1BlVykZhv2DAlhVL9CCYyG5phCxptCM8rwYFDeaLndYUJD87IwoE2Vcg2x4GaAkoYZL-quOkvbcF2x4vgsZXlRwvF2ddrC4zIeERru6V8aev7QdV3fubNd1_HG49GA7mjoeMHQndiBF3je2Pdd3z0M6L9jU2c4Gjm2bbt2EEzGnj-ZHD4ACHH6Cw" rel="noopener noreferrer"&gt;View on mermaid.live&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Early on, we elected to come up with a solution that was CI tooling-agnostic; whatever we did would not be bound to Jenkins, CircleCI, TravisCI, GitHub Actions, or the crontab running on the gaming PC in the closet. We didn't want to anchor ourselves to any of those things. Instead, we packaged all of the various scripts needed for our deployment process into a Ruby package we refer to internally as "KTool". It is a very opinionated shim layer between our code repositories and the tools we use to build the Docker image and push it, generate the Kubernetes YAML, and get that YAML applied to the Kubernetes cluster. KTool can be pulled into the pipeline, as a gem or a Docker image, and be called as an executable by whatever CI tool elects to bring it in. This gives us the added ability to change and add more features to KTool without having to chase around various GitHub Actions, CircleCI orbs, or other tooling-specific components.&lt;br&gt;
The above diagram is a bit of a simplification; depending on the service and the workflow, there's any number of various test/validate/wait steps that can fit in at any time. There's also a separate process for deploying feature branches to atomic dev environments, but that warrants its own blog post.&lt;/p&gt;

&lt;h2&gt;Dockerfile&lt;/h2&gt;

&lt;p&gt;The Dockerfile lives in the Git repository, so developers can add dependencies as needed. We do single-stage builds; that is, we build the Dockerfile once, and reuse the image as we promote it from dev to staging and prod. That way, we're certain that the artifacts going to prod are the same as the artifacts that we tested in staging. We store Docker images in an Amazon Elastic Container Registry accessible to the dev, staging, and production Kubernetes clusters. As a best practice, we avoid use of the &lt;code&gt;:latest&lt;/code&gt; tag, and instead tag our images with the commit hash of the repository, so we know exactly which image corresponds to which PR. Our repositories are set to immutable, so we can't accidentally overwrite a production image with something else.&lt;br&gt;
From the CI pipeline's point of view, all it's doing is running "KTool build".&lt;/p&gt;

&lt;h2&gt;Kubernetes Manifest&lt;/h2&gt;

&lt;p&gt;We don't expect developers to write their entire Kubernetes manifests; even if we did, there would have to be some automation to put the Docker image tag (which, as mentioned above, is a commit hash) into the pod specs. Instead, we ask developers to populate a simple YAML file that answers platform-agnostic questions like "does my webapp have an ingress?", "do I have a background worker too?", "how much RAM does it have?", "what command should it run?", and "what secrets should I bring in?". KTool then picks and chooses some &lt;code&gt;.yaml.erb&lt;/code&gt; templates we maintain and populates them with the right values. It then feeds them into Kustomize to collate them all together, insert the image tag and application name, and make a nice, compliant YAML file ready to go into a Kubernetes cluster. It also creates an ArgoCD application definition; more on that later.&lt;br&gt;
This gives us the added advantage of being able to update KTool with new "sane defaults"; say, one day we decide to turn on read-only filesystems, or mandate that everything have a reverse proxy sidecar to do service mesh-y things. We update KTool, and everyone just gets that in their manifest next time they deploy.&lt;/p&gt;
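&lt;p&gt;KTool is internal, so its real schema isn't shown here; purely as a hypothetical sketch, the developer-facing YAML file might answer those questions like so:&lt;/p&gt;

```yaml
# Hypothetical example only -- KTool's actual input format is internal
# and not documented in this post.
name: my-app
command: bundle exec puma
ingress:
  enabled: true
  paths:
    - /my-app
resources:
  memory: 512Mi
worker:
  enabled: true
  command: bundle exec sidekiq
secrets:
  - /my-app/production
```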

&lt;h2&gt;Deployment to Kubernetes&lt;/h2&gt;

&lt;p&gt;Once we have the manifest, KTool checks it into yet another Git repository, which holds all of our Kubernetes manifests.&lt;br&gt;
It is important to us that we version-control every manifest that goes into Kubernetes, so we know when an update happened, and how to roll back. It also helps identify config drift if someone did something by hand.&lt;br&gt;
Speaking of config drift: we use &lt;a href="https://argoproj.github.io/cd/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; as our means of actually getting our manifests applied to the cluster. It is pointed at the aforementioned repository full of YAML manifests, and as soon as something changes in the repository, it makes it so in Kubernetes. Not only does this mean that changes are automatically applied, but it reverts config drift, heals resources that were deleted by mistake, and provides a friendly GUI for developers looking to see how their services are behaving. &lt;br&gt;
This way, even if we somehow accidentally lose an entire EKS cluster, we can be confident that everything will come back if we install ArgoCD and point it at the repository.&lt;/p&gt;
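&lt;p&gt;A sketch of what such an ArgoCD Application definition looks like (the repository URL and names are illustrative):&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git  # the manifest repo
    targetRevision: main
    path: my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band config drift
```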

&lt;h1&gt;Impact&lt;/h1&gt;

&lt;p&gt;The unlocks from our move to Kubernetes are hard to count. Some of these deserve their own blog posts, but here are a few quick benefits in a nutshell.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers can update their runtime dependencies. Moving to a new version of Ruby or Python is as easy as updating the Dockerfile. No more need to get a server built or modified.&lt;/li&gt;
&lt;li&gt;Developers can install dependencies for their runtime. Need a specific library or binary? Go in the Dockerfile and install it. No more need to ask someone to update a Chef cookbook.&lt;/li&gt;
&lt;li&gt;Since pods just grow right back if they're disrupted, and we don't store any state on the hard drive, there's absolutely no reason why we can't use spot instances... so, our entire dev and staging environments are running on spot requests!&lt;/li&gt;
&lt;li&gt;If it's time to update an AMI (which, in this case, really means the AWS-managed EKS worker AMIs), we modify our managed worker groups, and the change is rolled out automatically; pods are rescheduled from old nodes to new nodes, and because we use PodDisruptionBudgets in our Deployment resources, it happens without downtime. Again, because there's nothing persistent on the compute instances other than the Docker images themselves, moving the pods around is trivial.&lt;/li&gt;
&lt;li&gt;We can scale horizontally at the push of a button (by updating the Deployment spec) or automatically (using the Horizontal Pod Autoscaler).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, this blog post only scratches the surface of our Kubernetes iceberg; there's a tremendous number of little implementation details, process improvements, and discoveries that came as part of this transition. Many of them deserve their own blog posts, and now that we've set some context, those entries can come. Stay tuned!&lt;/p&gt;
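&lt;p&gt;For the curious, here is a minimal Horizontal Pod Autoscaler sketch (the target name and thresholds are illustrative):&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods above 70% average CPU
```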

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>aws</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Trigger CircleCI Workflow. AKA Simple Deploy Button</title>
      <dc:creator>Ken Collins</dc:creator>
      <pubDate>Sun, 05 Feb 2023 14:29:28 +0000</pubDate>
      <link>https://forem.com/customink/trigger-circleci-workflow-aka-simple-deploy-button-hf0</link>
      <guid>https://forem.com/customink/trigger-circleci-workflow-aka-simple-deploy-button-hf0</guid>
      <description>&lt;p&gt;Very simple, no parameters needed, no enums, no booleans... just a really easy way to trigger a deploy with CircleCI. We can do this making use of the &lt;a href="https://circleci.com/docs/variables/#pipeline-values" rel="noopener noreferrer"&gt;trigger_source&lt;/a&gt; pipeline value. When you click the button in CircleCI to "Trigger Pipeline" the value would be &lt;code&gt;api&lt;/code&gt; vs something like &lt;code&gt;webhook&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.1&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-2204:current&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo 'Deploying...'&lt;/span&gt;
&lt;span class="na"&gt;workflows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;equal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;&amp;lt;&amp;lt; pipeline.trigger_source &amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your workflow needs a test job, consider doing something a bit more complicated. Here we use two &lt;code&gt;when&lt;/code&gt; conditions to work with a parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.1&lt;/span&gt;
&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;workflow&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;enum&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;The workflow to trigger.&lt;/span&gt;
    &lt;span class="na"&gt;enum&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-2204:current&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo 'Testing...'&lt;/span&gt;  
  &lt;span class="na"&gt;deploy-job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-2204:current&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo 'Deploying...'&lt;/span&gt;
&lt;span class="na"&gt;workflows&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;equal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;&amp;lt;&amp;lt; pipeline.parameters.workflow &amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;test-job&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;equal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;&amp;lt;&amp;lt; pipeline.parameters.workflow &amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt; &lt;span class="pi"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy-job&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your CircleCI config will run tests by default and you can easily trigger a deploy via any branch using the "Trigger Pipeline" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5b23gl0yk10sqpv3hur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5b23gl0yk10sqpv3hur.png" alt="Screen capture of the CircleCI application. This shows the trigger pipeline UI which has the Add Parameter disclosure open. The options Parameter type, Name, and Value have been set to string, workflow, deploy." width="661" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>announcement</category>
      <category>devto</category>
      <category>offers</category>
    </item>
    <item>
      <title>New Amazon Linux Dev Container Features</title>
      <dc:creator>Ken Collins</dc:creator>
      <pubDate>Mon, 31 Oct 2022 01:48:24 +0000</pubDate>
      <link>https://forem.com/customink/new-amazon-linux-dev-container-features-3j0c</link>
      <guid>https://forem.com/customink/new-amazon-linux-dev-container-features-3j0c</guid>
      <description>&lt;p&gt;🆕 &lt;strong&gt;Want to use &lt;a href="https://github.com/features/codespaces"&gt;Codespaces&lt;/a&gt; with Amazon Linux 2?&lt;/strong&gt; Check out &lt;a href="https://github.com/customink/codespaces-features"&gt;customink/codespaces-features&lt;/a&gt; for two custom features. 1) &lt;a href="https://github.com/customink/codespaces-features/tree/main/src/common-amzn"&gt;common-amzn&lt;/a&gt; 2) &lt;a href="https://github.com/customink/codespaces-features/tree/main/src/docker-in-docker-amzn"&gt;docker-in-docker-amzn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, last year I shared how we could &lt;a href="https://dev.to/aws-heroes/getting-started-with-github-codespaces-from-a-serverless-perspective-171k"&gt;integrate Codespaces&lt;/a&gt; into our AWS Lambda &lt;a href="https://dev.to/aws-heroes/lambda-containers-with-rails-a-perfect-match-4lgb"&gt;docker compose patterns&lt;/a&gt;. Since then Microsoft's Development Containers specification has come a LONG way. Everything is wrapped up nice and neatly at the &lt;a href="https://containers.dev"&gt;containers.dev&lt;/a&gt; site. Take a look if you have not already seen it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dev Containers?
&lt;/h2&gt;

&lt;p&gt;So why are Development Containers &amp;amp; Codespaces such a big deal? I can illustrate some Lambda &amp;amp; Kubernetes use cases below, but first I would like to spell out a few features that may be new to some, including existing Codespaces users.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Dev Container &lt;a href="https://containers.dev"&gt;specification&lt;/a&gt; at the lowest level of Codespaces is open to everyone, and a growing community has built lots of tooling around it.&lt;/li&gt;
&lt;li&gt;The specification has a reference implementation via a Node.js &lt;a href="https://github.com/devcontainers/cli"&gt;Command Line Interface&lt;/a&gt;. Think of this &lt;code&gt;devcontainer&lt;/code&gt; CLI as a higher-order docker compose. You can use it to run Codespaces projects locally!&lt;/li&gt;
&lt;li&gt;Atop the CLI, there is CI tooling for &lt;a href="https://github.com/devcontainers/ci"&gt;GitHub Actions&lt;/a&gt;. This means you can use the same development container as your CI/CD environment.&lt;/li&gt;
&lt;/ol&gt;
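&lt;p&gt;To make the pieces above concrete, here is a minimal &lt;code&gt;devcontainer.json&lt;/code&gt; sketch that the &lt;code&gt;devcontainer&lt;/code&gt; CLI, Codespaces, and the GitHub Action can all consume. The image and feature version tags are illustrative assumptions, not from this post:&lt;/p&gt;

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  },
  "postCreateCommand": "./bin/setup"
}
```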

&lt;h2&gt;
  
  
  Containers Usage Areas
&lt;/h2&gt;

&lt;p&gt;So where are containers used in your organization or projects? Here are some phases that most of us can identify with, where projects move from left to right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FWFdqDhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfaiqd32kl7juqatzkgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FWFdqDhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mfaiqd32kl7juqatzkgt.png" alt="Container Areas" width="880" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development:&lt;/strong&gt; Most of us have tried to use docker or compose at some point. For example, the most common use of this area would be running a database like MySQL. Docker makes this easy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD:&lt;/strong&gt; Typically where we run tests and hopefully build and/or deploy our code to production. If you have used CircleCI before, again a database container here might feel familiar. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runtime:&lt;/strong&gt; Our final container environment. For most of us this is production, but it could be any container orchestration platform like Kubernetes, Lambda, or Fargate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Old AWS SAM Patterns with Docker Compose
&lt;/h2&gt;

&lt;p&gt;Today our Lambda SAM cookiecutters leverage SAM's build image via docker compose to ensure local development happens in the same environment as our CI/CD tooling. We ended up with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cB-vpIR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brm2tkldsfln5adf75oy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cB-vpIR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brm2tkldsfln5adf75oy.png" alt="AWS Lambda Before Dev Containers" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the bottom we can see the host platform typically associated with each of these stages. Because we use Docker, we can be cross-platform and consistent. &lt;strong&gt;The problem?&lt;/strong&gt; ⚠️ Making up your own docker/compose patterns is a huge drag, from SSH patterns to Docker-in-Docker gotchas.&lt;/p&gt;

&lt;h2&gt;
  
  
  New AWS SAM Patterns with Dev Containers
&lt;/h2&gt;

&lt;p&gt;In the coming weeks the &lt;a href="https://github.com/customink/lamby-cookiecutter/tree/master/%7B%7Bcookiecutter.project_name%7D%7D"&gt;Lamby Cookiecutter&lt;/a&gt; will be updated to use Development Containers so folks with (or without) Codespaces can easily work with the project. The result will be something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eUTmwaFF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jxmjokl4nm48mehgzpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eUTmwaFF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jxmjokl4nm48mehgzpm.png" alt="AWS Lambda After Dev Containers" width="880" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Development Containers, so much docker compose boilerplate can be removed, thanks in huge part to our newly released &lt;a href="https://github.com/customink/codespaces-features"&gt;common &amp;amp; docker-in-docker Amazon Linux 2 features&lt;/a&gt;. If you want to see an example of how this helps everyone, including running Codespaces locally with VS Code, check out our &lt;a href="https://github.com/customink/crypteia#development"&gt;Crypteia Project's Development&lt;/a&gt; section. You can even use all this without VS Code &amp;amp; GitHub Codespaces. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;devcontainer build &lt;span class="nt"&gt;--workspace-folder&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
devcontainer up &lt;span class="nt"&gt;--workspace-folder&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
devcontainer run-user-commands &lt;span class="nt"&gt;--workspace-folder&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
devcontainer &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--workspace-folder&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; ./bin/setup
devcontainer &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--workspace-folder&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; ./bin/test-local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Unexplored Development Container Space
&lt;/h2&gt;

&lt;p&gt;So can Development Containers be used in your projects without the Lambda patterns above? Yes! Consider the following diagram that has a Platform Engineering team building base images. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx2gpWxy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fklx3ef7j0ypbnf3fff7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx2gpWxy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fklx3ef7j0ypbnf3fff7.png" alt="Where could Development Container fit into your Kubernetes/K8s Projects" width="880" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These teams typically approach containers from right to left, where base OS images become language-specific images with variants for CI/CD, just as SAM has build and runtime images. Technically, for them "Runtime" is some container registry like Amazon ECR, but you get the idea.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://www.customink.com"&gt;Custom Ink&lt;/a&gt; we are using our CircleCI images for our Kubernetes projects with Codespaces. The Microsoft team makes this easy since all of their features work with Ubuntu out of the box.&lt;/p&gt;

&lt;p&gt;If your development stages look something like the image above, please consider adopting Development Containers based on your CI/CD images and explore that big purple space by connecting your container value chain in a beautiful little circle. Thanks for reading!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Lambda Rust Extension for any Runtime to preload SSM Parameters as Secure Environment Variables!</title>
      <dc:creator>Ken Collins</dc:creator>
      <pubDate>Wed, 29 Jun 2022 12:24:04 +0000</pubDate>
      <link>https://forem.com/customink/rust-lambda-rust-extension-for-any-runtime-to-preload-ssm-parameters-as-secure-environment-variables-21fg</link>
      <guid>https://forem.com/customink/rust-lambda-rust-extension-for-any-runtime-to-preload-ssm-parameters-as-secure-environment-variables-21fg</guid>
      <description>&lt;p&gt;ℹ️ &lt;a href="https://github.com/customink/crypteia"&gt;Crypteia Hits v1.0.0 Miletstone!&lt;/a&gt; 🎉 - It now has support for Python among other popular languages like Ruby, Node, &amp;amp; PHP. Crypteia is easy to install as a Lambda Layer or in a Container. It can even be used with K8s containers!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/customink/crypteia"&gt;Crypteia&lt;/a&gt; is a new super fast Lambda Extension written in Rust which turns your serverless environment variables from SSM Parameter Store paths like these...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia-ssm:/myapp/SECRET&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... into real environment variables when using your Runtime's language of choice. For example, assuming the SSM Parameter path above returns &lt;code&gt;1A2B3C4D5E6F&lt;/code&gt; as the value, your code's environment variable methods would return that same value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;process.env.SECRET   // 1A2B3C4D5E6F
ENV['SECRET']        # 1A2B3C4D5E6F
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works by using a shared object library via the &lt;code&gt;LD_PRELOAD&lt;/code&gt; environment variable in coordination with our &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html"&gt;Lambda Extension&lt;/a&gt; binary that loads all Parameter Store values within a few milliseconds of your function starting up.&lt;/p&gt;
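&lt;p&gt;Conceptually, the extension resolves path-style values before your code ever reads them. Here is a minimal Python sketch of that idea only; Crypteia itself is Rust, and &lt;code&gt;FAKE_SSM&lt;/code&gt; below is a stand-in for Parameter Store, not a real API:&lt;/p&gt;

```python
# Illustrative sketch: values shaped like "x-crypteia-ssm:/some/path" are
# swapped for the value stored at that path before application code runs.

# Stand-in for SSM Parameter Store, using this post's example path and value.
FAKE_SSM = {"/myapp/SECRET": "1A2B3C4D5E6F"}

PREFIX = "x-crypteia-ssm:"

def resolve_env(env):
    """Return a copy of env with SSM path references replaced by their values."""
    resolved = dict(env)
    for name, value in env.items():
        if value.startswith(PREFIX):
            resolved[name] = FAKE_SSM[value[len(PREFIX):]]
    return resolved

env = resolve_env({"SECRET": "x-crypteia-ssm:/myapp/SECRET", "PLAIN": "value"})
print(env["SECRET"])  # 1A2B3C4D5E6F
print(env["PLAIN"])   # value
```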

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;When building your own Lambda Containers, &lt;a href="https://github.com/customink/crypteia/releases"&gt;download&lt;/a&gt; both the &lt;code&gt;crypteia&lt;/code&gt; binary and &lt;code&gt;libcrypteia.so&lt;/code&gt; shared object files that match your platform from our &lt;a href="https://github.com/customink/crypteia/releases"&gt;Releases&lt;/a&gt; page. Target platforms use the following naming conventions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Linux 2: &lt;code&gt;crypteia-amzn.zip&lt;/code&gt; &amp;amp; &lt;code&gt;libcrypteia-amzn.zip&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Debian, Ubuntu, Etc: &lt;code&gt;crypteia-debian.zip&lt;/code&gt; &amp;amp; &lt;code&gt;libcrypteia-debian.zip&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ When building your own Lambda Containers, please make sure &lt;a href="https://www.gnu.org/software/libc/"&gt;glibc&lt;/a&gt; is installed since this is used by &lt;a href="https://github.com/geofft/redhook"&gt;redhook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;⚠️ For now our project supports the &lt;code&gt;x86_64&lt;/code&gt; architecture, but we plan to release &lt;code&gt;arm64&lt;/code&gt; variants soon. Follow or contribute in our &lt;a href="https://github.com/customink/crypteia/issues/5"&gt;GitHub Issue&lt;/a&gt; which tracks this topic.&lt;/p&gt;

&lt;p&gt;Once these files are downloaded, they can be incorporated into your &lt;code&gt;Dockerfile&lt;/code&gt; like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /opt/lib
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /opt/extensions
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; crypteia /opt/extensions/crypteia&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; libcrypteia.so /opt/lib/libcrypteia.so&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; LD_PRELOAD=/opt/lib/libcrypteia.so&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Lambda Layer
&lt;/h4&gt;

&lt;p&gt;Our Amazon Linux 2 files can be used within a &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html"&gt;Lambda Layer&lt;/a&gt; that you can deploy to your own AWS account. You can use this project to build, publish, and deploy that layer since it has the SAM CLI installed. All you need to do is supply your own S3 bucket. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
./amzn/setup
&lt;span class="nv"&gt;S3_BUCKET_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-bucket ./layer/deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;First, you will need your secret environment variables set up in &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"&gt;AWS Systems Manager Parameter Store&lt;/a&gt;. These can be whatever &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-hierarchies.html"&gt;hierarchy&lt;/a&gt; you choose. Parameters can be any string type. However, we recommend using &lt;code&gt;SecureString&lt;/code&gt; to ensure your secrets are encrypted within AWS. For example, let's assume the following parameter paths and values exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/myapp/SECRET&lt;/code&gt; -&amp;gt; &lt;code&gt;1A2B3C4D5E6F&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/myapp/access-key&lt;/code&gt; -&amp;gt; &lt;code&gt;G7H8I9J0K1L2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/myapp/envs/DB_URL&lt;/code&gt; -&amp;gt; &lt;code&gt;mysql2://u:p@host:3306&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/myapp/envs/NR_KEY&lt;/code&gt; -&amp;gt; &lt;code&gt;z6y5x4w3v2u1&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
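&lt;p&gt;If you are starting from scratch, parameters like those above can be created with the AWS CLI; this assumes configured credentials with SSM write access:&lt;/p&gt;

```shell
# Store the example values as encrypted SecureString parameters.
aws ssm put-parameter --name "/myapp/SECRET" --value "1A2B3C4D5E6F" --type SecureString
aws ssm put-parameter --name "/myapp/access-key" --value "G7H8I9J0K1L2" --type SecureString
aws ssm put-parameter --name "/myapp/envs/DB_URL" --value "mysql2://u:p@host:3306" --type SecureString
aws ssm put-parameter --name "/myapp/envs/NR_KEY" --value "z6y5x4w3v2u1" --type SecureString
```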

&lt;p&gt;Crypteia supports two methods to fetch SSM parameters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;x-crypteia-ssm:&lt;/code&gt; - Single path for a single environment variable.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;x-crypteia-ssm-path:&lt;/code&gt; - Path prefix to fetch many environment variables.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using whatever serverless framework you prefer, set up your function's environment variables using either of the two SSM interfaces from above. For example, here is an environment variables section for an &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started.html"&gt;AWS SAM&lt;/a&gt; template that demonstrates all of Crypteia's features.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia-ssm:/myapp/SECRET&lt;/span&gt;
    &lt;span class="na"&gt;ACCESS_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia-ssm:/myapp/access-key&lt;/span&gt;
    &lt;span class="na"&gt;X_CRYPTEIA_SSM&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia-ssm-path:/myapp/envs&lt;/span&gt;
    &lt;span class="na"&gt;DB_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia&lt;/span&gt;
    &lt;span class="na"&gt;NR_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;x-crypteia&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When your function initializes, each of the four environment variables (&lt;code&gt;SECRET&lt;/code&gt;, &lt;code&gt;ACCESS_KEY&lt;/code&gt;, &lt;code&gt;DB_URL&lt;/code&gt;, and &lt;code&gt;NR_KEY&lt;/code&gt;) will return values from their respective SSM paths.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;process.env.SECRET;       // 1A2B3C4D5E6F
process.env.ACCESS_KEY;   // G7H8I9J0K1L2
process.env.DB_URL;       // mysql2://u:p@host:3306
process.env.NR_KEY;       // z6y5x4w3v2u1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are a few details about how Crypteia's internal implementation works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When accessing a single parameter path via &lt;code&gt;x-crypteia-ssm:&lt;/code&gt;, the environment variable name available to your runtime is used as is. No part of the parameter path affects the resulting name.&lt;/li&gt;
&lt;li&gt;When using &lt;code&gt;x-crypteia-ssm-path:&lt;/code&gt;, the name of the environment variable holding the path can be anything, and its value is left unchanged.&lt;/li&gt;
&lt;li&gt;The parameter path hierarchy passed with &lt;code&gt;x-crypteia-ssm-path:&lt;/code&gt; must be one level deep and end with valid environment variable names. These names must match environment variable placeholders using &lt;code&gt;x-crypteia&lt;/code&gt; values.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For security, the use of &lt;code&gt;DB_URL: x-crypteia&lt;/code&gt; placeholders ensures that your application's configuration remains in full control of which dynamic values can be set with &lt;code&gt;x-crypteia-ssm-path:&lt;/code&gt;.&lt;/p&gt;
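&lt;p&gt;A minimal Python sketch of that placeholder rule, for illustration only (Crypteia's real implementation is Rust):&lt;/p&gt;

```python
# Illustrative sketch: values fetched under an SSM path prefix are only applied
# to environment variables that opted in with the "x-crypteia" placeholder.

def apply_fetched(env, fetched):
    """Overwrite only the env vars whose value is the 'x-crypteia' placeholder."""
    resolved = dict(env)
    for name, value in fetched.items():
        if resolved.get(name) == "x-crypteia":
            resolved[name] = value
    return resolved

# Hypothetical values fetched from a path like /myapp/envs; "ROGUE" has no
# placeholder in the app's configuration, so it is ignored.
fetched = {"DB_URL": "mysql2://u:p@host:3306", "NR_KEY": "z6y5x4w3v2u1", "ROGUE": "evil"}
env = apply_fetched({"DB_URL": "x-crypteia", "NR_KEY": "x-crypteia"}, fetched)
print(env["DB_URL"])      # mysql2://u:p@host:3306
print("ROGUE" in env)     # False
```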

&lt;h4&gt;
  
  
  Lambda Layer
&lt;/h4&gt;

&lt;p&gt;Shown below is a simple Node.js 16 function which has the appropriate IAM Permissions and Crypteia Lambda Layer added. Also configured are the needed &lt;code&gt;LD_PRELOAD&lt;/code&gt; and &lt;code&gt;SECRET&lt;/code&gt; environment variables. The function's code logs the value of &lt;code&gt;process.env.SECRET&lt;/code&gt;, which correctly resolves to the value within SSM Parameter Store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9wGTCEgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k38z2bip4d7rb58a5fxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9wGTCEgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k38z2bip4d7rb58a5fxv.png" alt="Screenshot of the Environment variables in the AWS Lambda Console showing  raw `LD_PRELOAD` endraw  to  raw `/opt/lib/libcrypteia.so` endraw  and  raw `SECRET` endraw  to  raw `x-crypteia-ssm:/myapp/SECRET` endraw ." width="880" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kVeBvXE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qma1xry8evggassqzp0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kVeBvXE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qma1xry8evggassqzp0e.png" alt="Screenshot of Code source in the AWS Lambda Console showing the  raw `body` endraw  results of  raw `1A2B3C4D5E6F` endraw  which is resolved from SSM Parameter Store." width="880" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank You 💞
&lt;/h2&gt;

&lt;p&gt;Let me know if you find Crypteia useful or have any questions.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>rust</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Getting Started with GitHub Codespaces from a Serverless Perspective</title>
      <dc:creator>Ken Collins</dc:creator>
      <pubDate>Sun, 06 Feb 2022 22:37:15 +0000</pubDate>
      <link>https://forem.com/customink/getting-started-with-github-codespaces-from-a-serverless-perspective-51nc</link>
      <guid>https://forem.com/customink/getting-started-with-github-codespaces-from-a-serverless-perspective-51nc</guid>
      <description>&lt;p&gt;If you are into Serverless and AWS Lambda, you may already know that the &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS Serverless Application Model (SAM)&lt;/a&gt; CLI makes it easy to leverage their Docker &lt;a href="https://github.com/aws/aws-sam-build-images"&gt;build images&lt;/a&gt; as development containers. We do exactly this for our &lt;a href="https://lamby.custominktech.com/"&gt;Rails &amp;amp; Lambda&lt;/a&gt; projects.&lt;/p&gt;

&lt;p&gt;Leveraging Docker with SAM ensures we have a Linux environment and versioned dependencies that closely mimic the Lambda Runtime or Container being shipped. The use and &lt;a href="https://dev.to/quinncuatro/the-promise-of-docker-containers-57fd"&gt;The Promise of Docker&lt;/a&gt; to solve these problems is nothing new... but something else is.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✨ The Rise of Ephemeral Dev Environments
&lt;/h2&gt;

&lt;p&gt;A few weeks ago GitHub's engineering team released an &lt;a href="https://github.blog/2021-08-11-githubs-engineering-team-moved-codespaces/"&gt;in-depth article&lt;/a&gt; announcing their internal usage of the now generally available &lt;a href="https://github.com/features/codespaces"&gt;GitHub Codespaces&lt;/a&gt;. Since Custom Ink shares many of the same problems described in this post, I was curious if our Lambda projects could easily leverage Codespaces. But what is this new tool? Where did it come from? And what is this &lt;code&gt;devcontainer.json&lt;/code&gt; file?&lt;/p&gt;

&lt;p&gt;As best I can tell this all started in May of 2019 when the VS Code team first mentioned their &lt;a href="https://code.visualstudio.com/blogs/2019/05/02/remote-development"&gt;remote development extensions&lt;/a&gt;. About a year later this content was rolled up into the &lt;a href="https://code.visualstudio.com/docs/remote/remote-overview"&gt;VS Code Remote Development&lt;/a&gt; guides we have today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9JToGH-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21j55qowytmm548xtrgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9JToGH-g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21j55qowytmm548xtrgz.png" alt="VS Code Remote Development Architecture Diagram" width="880" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prior to Codespaces, we had a clear leader in the automated development environment space with &lt;a href="https://www.gitpod.io"&gt;Gitpod&lt;/a&gt;. It was even featured in a January 2021 episode of &lt;a href="https://www.youtube.com/watch?v=rjDDAFHEYEc&amp;amp;list=PLehXSATXjcQFHpz-HAO8YOC6EqFScEz27"&gt;Containers from the Couch&lt;/a&gt;. Gitpod leverages the same technology built into VS Code for remote development.&lt;/p&gt;

&lt;p&gt;However, sometimes slow and steady wins the race. If this were ever true for GitHub-based projects, I think we have a huge winner with GitHub Codespaces. Keep reading below on how your company (or you) could get started. I will even cover how well Codespaces has worked for our Lambda projects that use an existing Docker in Docker development pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ GitHub Settings
&lt;/h2&gt;

&lt;p&gt;GitHub Codespaces is currently ONLY available for GitHub Teams &amp;amp; Enterprise Cloud plans. It is not yet available for public repositories. If you are an administrator of such an account, here are a few things I did &lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization"&gt;at the organization level&lt;/a&gt; to get started experimenting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/enabling-codespaces-for-your-organization#setting-a-spending-limit"&gt;Enable Codespaces&lt;/a&gt;: This can also be disabled completely or enabled for select users.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/managing-repository-access-for-your-organizations-codespaces"&gt;Repository Access&lt;/a&gt;: You can even limit repositories that are able to use Codespaces. If your GitHub account leverages permissions &amp;amp; teams, remember, Codespaces (via the generated &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; will not grant anyone elevated permissions to other repositories.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/billing/managing-billing-for-github-codespaces/managing-spending-limits-for-codespaces"&gt;Manage Spending Limits&lt;/a&gt;: It would have been neat to see a way to limit which VMs (vCPU/Memory) options could have been used here.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/managing-encrypted-secrets-for-your-repository-and-organization-for-codespaces#adding-secrets-for-an-organization"&gt;Organizational Secrets&lt;/a&gt;: Create any secrets your organization needs to enable individuals to work. Remember, Codespaces secrets can be set at the repository or even user level too. Pick the one(s) that work the best for y'all.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔰 Developer Tips
&lt;/h2&gt;

&lt;p&gt;It may go without saying, but getting good at Codespaces for most means getting good at VS Code. Technically you could bring your own editor like Vim or Emacs. But trust me, as a recent Sublime Text convert, switching to VS Code is worth it. Make sure to take the time to Google, learn, and in some cases &lt;a href="https://github.com/Microsoft/vscode-sublime-keybindings"&gt;install packages&lt;/a&gt; that make the transition easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dotfiles &amp;amp; Settings
&lt;/h3&gt;

&lt;p&gt;Remote development needs to feel local! Everything that makes your editor &amp;amp; terminal productive needs to be available to you. As described in the &lt;a href="https://docs.github.com/en/codespaces/customizing-your-codespace/personalizing-codespaces-for-your-account"&gt;Personalizing Codespaces&lt;/a&gt; guide, setting up your Dotfiles was high on my list.&lt;/p&gt;

&lt;p&gt;For years I have maintained a personal Zshkit with a ton of personal functions and aliases. When moving to Codespaces, I took the time to clean them up, create a &lt;code&gt;github.com/metaskills/dotfiles&lt;/code&gt; repository, clone it locally, and hook it up to my ZSH (the default shell on Mac) &lt;code&gt;~/.zshrc&lt;/code&gt; file. Codespaces will automatically clone this repo when creating a Codespace and install it by running the &lt;code&gt;install.sh&lt;/code&gt; script. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CODESPACES&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"source /workspaces/.codespaces/.persistedshare/dotfiles/rc"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.zshrc
  &lt;span class="nb"&gt;sudo &lt;/span&gt;chsh &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/bin/zsh
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can leverage the &lt;code&gt;CODESPACES&lt;/code&gt; environment variable to do any per-environment customization. Also, do not forget to use &lt;a href="https://code.visualstudio.com/docs/editor/settings-sync"&gt;Settings Sync&lt;/a&gt;. I think this is only needed if you use VS Code's web-based editor. More on that topic later.&lt;/p&gt;
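&lt;p&gt;As a tiny sketch of that per-environment branching, something like this in your Dotfiles works (the editor values here are illustrative, not from my setup):&lt;/p&gt;

```shell
# Sketch: branch shell configuration on the CODESPACES variable.
# The editor choices below are illustrative assumptions.
pick_editor() {
  if [ "${CODESPACES:-false}" = "true" ]; then
    echo "code --wait"
  else
    echo "vim"
  fi
}

EDITOR="$(pick_editor)"
export EDITOR
```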

&lt;h3&gt;
  
  
  Your Codespaces Settings
&lt;/h3&gt;

&lt;p&gt;You can &lt;a href="https://docs.github.com/en/codespaces/managing-your-codespaces"&gt;Manage Your Codespaces&lt;/a&gt; settings at roughly the same level as the organization. Here are a few settings I changed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access &amp;amp; Security: Set this to "All repositories". Your needs may vary.&lt;/li&gt;
&lt;li&gt;Editor Preference: Set to "Visual Studio Code" rather than the web editor. This ensures the &lt;code&gt;[&amp;lt;&amp;gt; Code]&lt;/code&gt; button on repos opens VS Code on my Mac and avoids clicking through a redirect in the browser.&lt;/li&gt;
&lt;li&gt;Region: I set this manually to &lt;code&gt;EastUs&lt;/code&gt; but I suspect I had no reason to do so.&lt;/li&gt;
&lt;li&gt;Added Secrets: Read below on using SSH with Ruby Bundler or NPM packages.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Codespaces Extension
&lt;/h3&gt;

&lt;p&gt;Install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces"&gt;GitHub Codespaces&lt;/a&gt; extension for VS Code. I think this is done for you automatically if you are using the web-based editor. Installing it on your host machine's VS Code means you can use Codespaces without ever browsing to GitHub.com and clicking a &lt;code&gt;[&amp;lt;&amp;gt; Code]&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pCXFXuQM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d97miz9543w1h22fdp0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pCXFXuQM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d97miz9543w1h22fdp0v.png" alt="The Codespaces Command Pallet in VS Code provided by the Codespaces Extension" width="880" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Integrated Terminal
&lt;/h3&gt;

&lt;p&gt;Assuming you have set up your Dotfiles, VS Code's &lt;a href="https://code.visualstudio.com/docs/editor/integrated-terminal"&gt;integrated terminal&lt;/a&gt; should feel familiar, mirroring your host machine's prompt, aliases, and more. If your default shell is ZSH, you may need to do a few things to help Codespaces use ZSH by default instead of Bash. Here are my current settings for the integrated terminal. Mind you, there was (and maybe still is) &lt;a href="https://github.community/t/integrated-terminal-setting-not-respected/145625"&gt;a bug&lt;/a&gt; in VS Code where ZSH would not be respected. I have noticed in some cases Bash is used, but it is easy to launch a new profile with ZSH if that happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.fontSize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.defaultProfile.osx"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"zsh"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.defaultProfile.linux"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"zsh"&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;code&gt;Command+K&lt;/code&gt; to clear the terminal's buffer is second nature to most. By default this key binding will not reach the integrated terminal. You can edit your Keyboard Shortcuts JSON file to fix that. Below is a screen capture of the magic little button you have to press to edit that raw JSON file, followed by the snippet to add.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t4jEsdyF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4a77pqc3bk7zpymyzka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t4jEsdyF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4a77pqc3bk7zpymyzka.png" alt="Super Hidden Keyboard Shortcuts JSON Edit Button" width="880" height="198"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cmd+k"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"workbench.action.terminal.clear"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"when"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"terminalFocus"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terminal visibility and placement: when working on my laptop's smaller screen, I learned that you can use &lt;code&gt;Control+~&lt;/code&gt; to toggle the visibility of the integrated terminal. However, when working at my desk on a larger screen, I really want the integrated terminal to the right of my editor. Thanks to &lt;a href="https://stackoverflow.com/questions/41874426/how-do-i-move-the-panel-in-visual-studio-code-to-the-right-side"&gt;this Stack Overflow answer&lt;/a&gt;, here are the convoluted steps to make that happen. Hopefully one day they will make this easier. 😅&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At the right top of the integrated terminal, click the &lt;code&gt;+&lt;/code&gt; sign to open a 2nd terminal.&lt;/li&gt;
&lt;li&gt;Within the panel to the right, right click either of the two profiles and select &lt;code&gt;Move into Editor Area&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Close the bottom integrated terminal with the &lt;code&gt;x&lt;/code&gt; button.&lt;/li&gt;
&lt;li&gt;Focus the editor tab that was moved in step 2, then click the &lt;code&gt;[|]&lt;/code&gt; split editor button.&lt;/li&gt;
&lt;li&gt;Close the shell tab on the left side of the screen.&lt;/li&gt;
&lt;/ol&gt;
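&lt;p&gt;For what it's worth, VS Code also has a &lt;code&gt;workbench.panel.defaultLocation&lt;/code&gt; setting that, if your build supports it, may save you the steps above:&lt;/p&gt;

```json
{
  "workbench.panel.defaultLocation": "right"
}
```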

&lt;h2&gt;
  
  
  🎉 Fun Highlights
&lt;/h2&gt;

&lt;p&gt;Here are a few things about Codespaces' DX that pleasantly surprised me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When learning Codespaces or working on uncommitted code, you may have to rebuild your development container. Codespaces automatically maintains your present working directory, open files, etc. when doing this. Amazing!&lt;/li&gt;
&lt;li&gt;You can see all your Codespaces on GitHub by navigating to &lt;a href="https://github.com/codespaces"&gt;https://github.com/codespaces&lt;/a&gt;. However, I typically use VS Code's &lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces"&gt;extension&lt;/a&gt; to navigate, open, and disconnect.&lt;/li&gt;
&lt;li&gt;Leveraging the &lt;code&gt;CODESPACES&lt;/code&gt; environment variable set to &lt;code&gt;true&lt;/code&gt; is an easy way to integrate your existing tooling into Codespaces allowing your teams to support multiple ways to bootstrap your applications.&lt;/li&gt;
&lt;li&gt;Forwarded ports are automatically detected via the integrated terminal's STDOUT. For example, a &lt;code&gt;bin/rails server&lt;/code&gt; will output whatever host/port you are using and Codespaces will see it. If needed, you can use the &lt;code&gt;forwardPorts&lt;/code&gt; option in &lt;code&gt;devcontainer.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ⚠️ Difficult Lessons
&lt;/h2&gt;

&lt;p&gt;Some hard lessons learned when dipping into the deep end of using GitHub Codespaces. If you have any to share, please drop some comments below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Packages &amp;amp; SSH
&lt;/h3&gt;

&lt;p&gt;GitHub does a great job of providing your Codespace with a short-lived &lt;code&gt;GITHUB_TOKEN&lt;/code&gt;. Most package managers, including NPM and Bundler, can leverage this. However, if your organization has standardized on SSH, setting up your projects could be a problem.&lt;/p&gt;
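&lt;p&gt;For teams that can use the token, the wiring is small. Here is a minimal sketch for NPM, assuming the GitHub Packages registry (swap in your own); Bundler has an equivalent &lt;code&gt;bundle config&lt;/code&gt; credential setting:&lt;/p&gt;

```shell
# Sketch: point NPM at the short-lived GITHUB_TOKEN Codespaces injects.
# The GitHub Packages registry URL is an assumption; use your registry.
setup_npm_token() {
  # $1: path to an .npmrc file, $2: token value
  echo "//npm.pkg.github.com/:_authToken=$2" >> "$1"
}

# Only wire things up when a token was actually injected.
if [ -n "$GITHUB_TOKEN" ]; then
  setup_npm_token "$HOME/.npmrc" "$GITHUB_TOKEN"
fi
```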

&lt;p&gt;Thankfully, when I reached out on Twitter, Jonathan Carter on the Codespaces team &lt;a href="https://twitter.com/LostInTangent/status/1427053387007225861"&gt;seemed to suggest&lt;/a&gt; they may be working on a native SSH integration one day. Till then, here is the solution I came up with. This process addresses some sequencing issues around &lt;code&gt;devcontainer.json&lt;/code&gt;'s &lt;a href="https://code.visualstudio.com/docs/remote/devcontainerjson-reference#_lifecycle-scripts"&gt;Lifecycle Scripts&lt;/a&gt; and when your Dotfiles are installed. Credit to VS Code's &lt;a href="https://code.visualstudio.com/docs/remote/containers#_using-ssh-keys"&gt;Using SSH Keys&lt;/a&gt; guide. Also, some things here are pulled directly from the &lt;a href="https://github.com/webfactory/ssh-agent"&gt;GitHub Action&lt;/a&gt; to set up SSH. Again, thanks to Jonathan Carter for the ideas.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a personal Codespace secret called &lt;code&gt;PERSONAL_SSH_KEY&lt;/code&gt; by visiting this page &lt;a href="https://github.com/settings/codespaces/secrets/new"&gt;https://github.com/settings/codespaces/secrets/new&lt;/a&gt; and adding your private key, typically found in the &lt;code&gt;~/.ssh/id_rsa&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Add this snippet to your &lt;code&gt;postCreate&lt;/code&gt; script. It ensures GitHub is in the known hosts for SSH.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Adding GitHub.com keys to ~/.ssh/known_hosts"&lt;/span&gt;
&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.ssh/known_hosts
&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;github.com ssh-dss AAAAB3NzaC1kc3MAAACBANGFW2P9xlGU3zWrymJgI/lKo//ZW2WfVtmbsUZJ5uyKArtlQOT2+WRhcg4979aFxgKdcsqAYW3/LS1T2km3jYW/vr4Uzn+dXWODVk5VlUiZ1HFOHf6s6ITcZvjvdbp6ZbpM+DuJT7Bw+h5Fx8Qt8I16oCZYmAPJRtu46o9C2zk1AAAAFQC4gdFGcSbp5Gr0Wd5Ay/jtcldMewAAAIATTgn4sY4Nem/FQE+XJlyUQptPWMem5fwOcWtSXiTKaaN0lkk2p2snz+EJvAGXGq9dTSWHyLJSM2W6ZdQDqWJ1k+cL8CARAqL+UMwF84CR0m3hj+wtVGD/J4G5kW2DBAf4/bqzP4469lT+dF2FRQ2L9JKXrCWcnhMtJUvua8dvnwAAAIB6C4nQfAA7x8oLta6tT+oCk2WQcydNsyugE8vLrHlogoWEicla6cWPk7oXSspbzUcfkjN3Qa6e74PhRkc7JdSdAlFzU3m7LMkXo1MHgkqNX8glxWNVqBSc0YRdbFdTkL0C6gtpklilhvuHQCdbgB3LBAikcRkDp+FCVkUgPC/7Rw==&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.ssh/known_hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add this snippet to your Dotfiles. It will ensure the proper SSH agent is started, if not already, and that the key environment variables are set.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CODESPACES&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_AUTH_SOCK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nv"&gt;RUNNING_AGENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;ps &lt;span class="nt"&gt;-ax&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s1"&gt;'ssh-agent -s'&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt; | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'[:space:]'&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RUNNING_AGENT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
      &lt;span class="c"&gt;# Launch a new instance of the agent&lt;/span&gt;
      ssh-agent &lt;span class="nt"&gt;-s&lt;/span&gt; &amp;amp;&amp;gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.ssh/ssh-agent
    &lt;span class="k"&gt;fi
    &lt;/span&gt;&lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.ssh/ssh-agent&lt;span class="sb"&gt;`&lt;/span&gt;
  &lt;span class="k"&gt;fi&lt;/span&gt;
  &lt;span class="c"&gt;# Add my SSH key.&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PERSONAL_SSH_KEY&lt;/span&gt;&lt;span class="p"&gt;+1&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;ssh-add - &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PERSONAL_SSH_KEY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;fi
fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to see this all come together with our Docker in Docker Lambda patterns, please read the &lt;a href="https://dev.to/aws-heroes/serverless-docker-patterns-4g1p"&gt;Serverless Docker Patterns&lt;/a&gt; article in this series where we describe how to use the &lt;code&gt;SSH_AUTH_SOCK&lt;/code&gt; in a cross platform way for Mac &amp;amp; Linux.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CLI
&lt;/h3&gt;

&lt;p&gt;For our Lambda projects we use Docker in Docker patterns where both the AWS &amp;amp; SAM CLIs are pre-installed on the development image. However, you may need the AWS CLI installed on the developer's host machine too, which in this case is the Codespace itself. Here is a short snippet that you can use in your &lt;code&gt;postCreate&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing AWS CLI"&lt;/span&gt;
&lt;span class="nb"&gt;pushd&lt;/span&gt; /tmp
curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
unzip &lt;span class="nt"&gt;-qq&lt;/span&gt; awscliv2.zip
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; awscliv2.zip ./aws
&lt;span class="nb"&gt;popd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Docker in Docker
&lt;/h3&gt;

&lt;p&gt;I've said this before, but cross-platform Docker in Docker is really hard. This series aims to cover most of those problems, but one I learned the hard way is that sometimes the pain comes from the ones you love... in this case, AWS SAM. The team is doing some amazing work, but I ran into a few issues where Docker in Docker patterns broke down. Read these for details.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-sam-cli/issues/2837#issuecomment-845487064"&gt;No Response from Invoke Container for Lambda Inside docker-compose #2837&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/aws-sam-cli/issues/921#issuecomment-907859353"&gt;Watch Option for SAM Build Command #921&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚂 Full Lamby Example
&lt;/h2&gt;

&lt;p&gt;Assuming the other patterns are in place, like the various &lt;code&gt;postCreate&lt;/code&gt; hooks for SSH, using GitHub Codespaces with your already Docker'ized project is super easy. Here is a complete &lt;code&gt;.devcontainer/devcontainer.json&lt;/code&gt; file for one of our projects. Again, see the &lt;a href="https://dev.to/aws-heroes/serverless-docker-patterns-4g1p"&gt;Serverless Docker Patterns&lt;/a&gt; related post on how we are using &lt;code&gt;COMPOSE_FILE&lt;/code&gt; for Mac filesystem performance and why it would be needed here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-application"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"forwardPorts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4020&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"remoteEnv"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"COMPOSE_FILE"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker-compose.yml"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"postCreateCommand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./.devcontainer/postCreate"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In fact, none of this would be needed for a starter application! Give it a try. Go through our &lt;a href="https://lamby.custominktech.com/docs/quick_start"&gt;Lamby Quick Start&lt;/a&gt; guide, commit your project to GitHub... and give Codespaces a try!&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 Security Questions
&lt;/h2&gt;

&lt;p&gt;The Codespaces team was kind enough to write their own &lt;a href="https://docs.github.com/en/codespaces/codespaces-reference/security-in-codespaces"&gt;Security in Codespaces&lt;/a&gt; documentation. I'll highlight their introduction below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Codespaces is designed to be security hardened by default. Consequently, you will need to ensure that your software development practices do not risk reducing the security posture of your codespace.&lt;/p&gt;

&lt;p&gt;This guide describes the way Codespaces keeps your development environment secure and provides some of the good practices that will help maintain your security as you work. As with any development tool, remember that you should only open and work within repositories you know and trust.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Good stuff! Security is a shared responsibility and it appears GitHub is doing their part. Please read over the full documentation for more information, but here are a few things I paid special attention to.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/reviewing-your-organizations-audit-logs-for-codespaces"&gt;Audit Logs&lt;/a&gt;: Are generated and can be queried.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/managing-codespaces-for-your-organization/managing-encrypted-secrets-for-your-repository-and-organization-for-codespaces"&gt;Organization &amp;amp; User Secrets&lt;/a&gt;: Built on the &lt;a href="https://libsodium.gitbook.io/doc/public-key_cryptography/sealed_boxes"&gt;same technology&lt;/a&gt; GitHub draws a line between GitHub standard org/user secrets with the Codespace ones. Again, they can be set at the organization, repository, or user. Providing an immense amount of control and security layers.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/customizing-your-codespace/personalizing-codespaces-for-your-account#dotfiles"&gt;Dotfiles&lt;/a&gt;: Remind users that these are public repositories! Tho possible to encrypt secrets, I personally recommend keeping them basic to aliases and functions.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces/codespaces-reference/security-in-codespaces#isolated-networking"&gt;Secure Networking&lt;/a&gt;: Authenticated via GitHub via temporary tokens. Forwarding ports for web servers is done securely over the network between the host. Nothing is public by default.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔮 What is Coming?
&lt;/h2&gt;

&lt;p&gt;As mentioned above, I would love to see a native SSH solution. For now, the workarounds are minimal and feel secure thanks to the GitHub Secrets and Codespaces integration.&lt;/p&gt;

&lt;p&gt;In their introductory &lt;a href="https://github.blog/2021-08-11-githubs-engineering-team-moved-codespaces/"&gt;blog article&lt;/a&gt;, the GitHub team put a lot of emphasis on prebuilds, ensuring that each Codespaces development environment is super fast to set up. This was critical for their team, and as of now Gitpod is positioning this as a &lt;a href="https://www.gitpod.io/gitpod-vs-github-codespaces"&gt;key differentiator&lt;/a&gt; for them. I suspect prebuilds are coming soon. 🤔&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Resources
&lt;/h2&gt;

&lt;p&gt;Thanks so much for reading! I would love to hear if you found this article helpful or what your organization may be doing with GitHub Codespaces. 💕&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.github.com/en/codespaces"&gt;GitHub Codespaces&lt;/a&gt; - Blazing fast cloud
developer environments with Visual Studio Code backed by high performance VMs that start in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.blog/2021-08-11-githubs-engineering-team-moved-codespaces/"&gt;GitHub’s Engineering Team has moved to Codespaces&lt;/a&gt; - Great description of the business needs for easy development environments. Common for most orgs.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://lamby.custominktech.com/docs/quick_start"&gt;Getting Started with Rails on Lambda&lt;/a&gt; - An quick start guide using Docker for development with GitHub &amp;amp; Codespaces.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.visualstudio.com/docs/remote/remote-overview"&gt;VS Code Remote Development&lt;/a&gt; - The architecture behind GitHub Codespaces.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gitpod.io"&gt;Gitpod&lt;/a&gt; - Spin up fresh, automated dev environments
for each task, in the cloud, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gitpod.io/blog/ephemeral-dev-envs"&gt;DevX Digest: The Rise of Ephemeral Developer Environments&lt;/a&gt; - Great post by &lt;a href="https://twitter.com/paulienuh"&gt;Pauline P. Narvas&lt;/a&gt; on where cloud-based dev environments are headed.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>codespaces</category>
      <category>containers</category>
      <category>serverless</category>
      <category>lambda</category>
    </item>
    <item>
      <title>AWS Systems Manager (SSM) Cross Region Replication</title>
      <dc:creator>Katherine (she/her)</dc:creator>
      <pubDate>Wed, 12 Jan 2022 20:07:20 +0000</pubDate>
      <link>https://forem.com/customink/aws-systems-manager-ssm-cross-region-replication-3ah3</link>
      <guid>https://forem.com/customink/aws-systems-manager-ssm-cross-region-replication-3ah3</guid>
      <description>&lt;h2&gt;
  
  
  Overview of SSM Replication
&lt;/h2&gt;

&lt;p&gt;This blog post will explain in detail how to set up cross region replication for &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"&gt;AWS Parameter Store&lt;/a&gt;. As of the writing of this blog post, AWS does not have a native feature for replicating parameters in SSM. If you are using SSM Parameter Store instead of Secrets Manager and are seeking a way to replicate parameters for DR/Multi-Region purposes, this post may help you.&lt;/p&gt;
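&lt;p&gt;At its core, each replication is just a cross-region read and write. Here is a rough sketch of that single step with the AWS CLI; the rest of this post automates it with Lambda, and the names and regions below are illustrative:&lt;/p&gt;

```shell
# Sketch: copy one SSM parameter from a source region to a target region.
# Assumes the AWS CLI v2 and credentials with SSM read/write access.
replicate_param() {
  # $1: parameter name, $2: source region, $3: target region
  value="$(aws ssm get-parameter --name "$1" --with-decryption \
    --region "$2" --query Parameter.Value --output text)"
  aws ssm put-parameter --name "$1" --value "$value" \
    --type SecureString --overwrite --region "$3"
}
```

&lt;p&gt;For example, &lt;code&gt;replicate_param /myapp/db_password us-east-1 us-east-2&lt;/code&gt;.&lt;/p&gt;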

&lt;p&gt;Diagram showing the architecture setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G7w26Z4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwcx4ega1k3xtmyw2127.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G7w26Z4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwcx4ega1k3xtmyw2127.png" alt="Architecture setup for ssm replication" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Framework Setup
&lt;/h2&gt;

&lt;p&gt;I used the &lt;a href="https://github.com/customink/lamby-cookiecutter"&gt;Lamby&lt;/a&gt; cookiecutter as the framework for this Lambda, which made a lot of the initial setup very easy! Please take a look at that site &amp;amp; set up your serverless framework for the work ahead. I will first share the CloudFormation template used, then share the code that makes the replication work, as well as explain in detail what's happening.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS SSM regional replication for multi-region setup
Parameters:
  StageEnv:
    Type: String
    Default: dev
    AllowedValues:
      - test
      - dev
      - staging
      - prod
Mappings:
  KmsMap:
    us-east-1:
      dev: 'arn:aws:kms:us-east-1:123456:key/super-cool-key1'
      staging: 'arn:aws:kms:us-east-1:123456:key/super-cool-key2' 
      prod: 'arn:aws:kms:us-east-1:123456:key/super-cool-key3'
    us-east-2:
      dev: 'arn:aws:kms:us-east-2:123456:key/super-cool-key1'
      staging: 'arn:aws:kms:us-east-2:123456:key/super-cool-key1'
      prod: 'arn:aws:kms:us-east-2:123456:key/super-cool-key1' 
  DestinationMap:
    us-east-1: 
      target: "us-east-2"
Resources:
  ReplicationQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub 'SSM-SQS-replication-${StageEnv}-${AWS::Region}'
      VisibilityTimeout: 1000
  LambdaRegionalReplication:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: lib/ssm_regional_replication.handler
      Runtime: ruby2.7
      Timeout: 900
      MemorySize: 512
      Environment:
        Variables:
          STAGE_ENV: !Ref StageEnv
          TARGET_REGION: !FindInMap [DestinationMap, !Ref AWS::Region, target]
          SKIP_SYNC: 'skip_sync'
      Events:
        InvokeFromSQS:
          Type: SQS
          Properties:
            Queue: {"Fn::GetAtt" : [ "ReplicationQueue", "Arn" ]}
            BatchSize: 1
            Enabled: true
        ReactToSSM:
          Type: EventBridgeRule
          Properties:
            Pattern:
              detail-type:
                - Parameter Store Change 
              source:
                - aws.ssm
      Policies:
      - Statement:
        - Sid: ReadSSM
          Effect: Allow
          Action:
          - ssm:GetParameter
          - ssm:GetParameters
          - ssm:PutParameter
          - ssm:DeleteParameter
          - ssm:AddTagsToResource
          - ssm:ListTagsForResource
          Resource: 
          - !Sub "arn:aws:ssm:*:${AWS::AccountId}:parameter/*"
      - Statement:
        - Sid: DecryptSSM
          Effect: Allow
          Action:
          - kms:Decrypt
          - kms:Encrypt
          Resource: 
          - !FindInMap [KmsMap, us-east-1, !Ref StageEnv]
          - !FindInMap [KmsMap, us-east-2, !Ref StageEnv]
  LambdaFullReplication:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: lib/ssm_full_replication.handler
      Runtime: ruby2.7
      Timeout: 900
      MemorySize: 512
      Environment:
        Variables:
          STAGE_ENV: !Ref StageEnv
          TARGET_REGION: !FindInMap [DestinationMap, !Ref AWS::Region, target]
          SKIP_SYNC: 'skip_sync'
      Events:
        DailyReplication:
          Type: Schedule
          Properties:
            Description: Cronjob to run replication at 9:30am EST every Wednesday (cron is UTC)
            Enabled: True 
            Name: DailySSMReplication
            Schedule: "cron(30 13 ? * 4 *)"
      Policies:
      - Statement:
        - Sid: SQSPerms
          Effect: Allow
          Action:
          - sqs:SendMessage
          Resource: 
          - !Sub "arn:aws:sqs:*:${AWS::AccountId}:SSM-SQS-replication-*"
      - Statement:
        - Sid: ReadSSM
          Effect: Allow
          Action:
          - ssm:GetParameter
          - ssm:GetParameters
          - ssm:PutParameter
          - ssm:AddTagsToResource
          - ssm:ListTagsForResource
          - ssm:DescribeParameters
          Resource: 
          - !Sub "arn:aws:ssm:*:${AWS::AccountId}:*"
          - !Sub "arn:aws:ssm:*:${AWS::AccountId}:parameter/*"
      - Statement:
        - Sid: DecryptSSM
          Effect: Allow
          Action:
          - kms:Decrypt
          - kms:Encrypt
          Resource: 
          - !FindInMap [KmsMap, us-east-1, !Ref StageEnv]
          - !FindInMap [KmsMap, us-east-2, !Ref StageEnv]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above template does a number of things: it creates my SQS queue, an event-based regional replication lambda, and a cron-based full replication lambda. Under the 'Mappings' section I have "KmsMap", which maps to the aws/ssm KMS keys. If you use other keys for your SSM entries, enter those values here. If you use &lt;em&gt;many&lt;/em&gt; keys across your SSM parameters, simply add them to the lambda's policy resources, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - Statement:
        - Sid: DecryptSSM
          Effect: Allow
          Action:
          - kms:Decrypt
          - kms:Encrypt
          Resource: 
          - !FindInMap [KmsMap, us-east-1, !Ref StageEnv]
          - !FindInMap [KmsMap, us-east-2, !Ref StageEnv]
          - 'arn:aws:kms:us-east-1:123456:key/my-managed-key1' 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The other 'Mapping', &lt;code&gt;DestinationMap&lt;/code&gt;, sets up my source and target region. My original SSM parameters are in &lt;code&gt;us-east-1&lt;/code&gt;, so the target is &lt;code&gt;us-east-2&lt;/code&gt; in this case. The SQS queue holds the parameter names emitted by &lt;code&gt;LambdaFullReplication&lt;/code&gt;; because lambdas cannot run indefinitely, a single invocation might not get through all of your parameters on its own. Instead, the &lt;code&gt;LambdaFullReplication&lt;/code&gt; function sends the parameter names to the SQS queue, and &lt;code&gt;LambdaRegionalReplication&lt;/code&gt; then performs the put action in the destination region. The queue's &lt;code&gt;VisibilityTimeout&lt;/code&gt; is set to &lt;code&gt;1000&lt;/code&gt; seconds to allow some wiggle room beyond the lambda's timeout (&lt;code&gt;900&lt;/code&gt; seconds). &lt;br&gt;
The full replication lambda runs every Wednesday (or whatever frequency you'd like) for two reasons: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;to do the initial get/put for all existing parameters, and&lt;/li&gt;
&lt;li&gt;to catch any parameters that have had the &lt;code&gt;skip_sync&lt;/code&gt; tag added or removed&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will discuss the &lt;code&gt;skip_sync&lt;/code&gt; tag in detail when walking through the code. The regional replication lambda runs whenever there's an entry in the SQS queue to process, or anytime a parameter changes, driven by EventBridge events.&lt;/p&gt;
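&lt;p&gt;As a quick, AWS-free sketch of what that opt-out check boils down to: the tag lists below are hypothetical samples of the shape &lt;code&gt;list_tags_for_resource&lt;/code&gt; returns, and the helper mirrors the &lt;code&gt;skip_sync?&lt;/code&gt; logic shown later in &lt;code&gt;parameter_store.rb&lt;/code&gt;.&lt;/p&gt;

```ruby
# Hypothetical sample tags mirroring what list_tags_for_resource returns;
# the check itself is just a lookup for a tag whose key is 'skip_sync'.
SKIP_TAG = 'skip_sync'

def skip_sync?(tags)
  tags.select { |tag| tag[:key] == SKIP_TAG }.any?
end

opted_out  = [{ key: 'skip_sync', value: 'true' }, { key: 'team', value: 'infra' }]
replicated = [{ key: 'team', value: 'infra' }]

skip_sync?(opted_out)  # returns true
skip_sync?(replicated) # returns false
```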
&lt;h2&gt;
  
  
  Code Setup
&lt;/h2&gt;

&lt;p&gt;Next I will discuss and share the Ruby code that actually does the work. Three Ruby files make up these lambda functions: &lt;code&gt;parameter_store.rb&lt;/code&gt;, &lt;code&gt;ssm_regional_replication.rb&lt;/code&gt;, and &lt;code&gt;ssm_full_replication.rb&lt;/code&gt;. I will share each file along with comments explaining what is happening.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'aws-sdk-ssm'
# Create ParameterStore class, to be shared by both regional
# and full replication lambda.
class ParameterStore
  # "attr_accessor" creates getter/setter methods for the
  # client, response, name, and tag_list instance variables. 
  attr_accessor :client, :response, :name, :tag_list
  # The initializer takes an options hash; assigning through the
  # accessors here lets the client &amp;amp; name instance vars
  # be used outside of this method.
  def initialize(h)
    self.client = h[:client] # the SSM client connection for the source region
    self.name = h[:name] # gets the name of the param &amp;amp; assigns it to the name instance var
  end
  # class-level convenience method: builds an instance from the
  # client &amp;amp; name args, then looks the parameter up.
  def self.find_by_name(client, name) 
    # create a new instance &amp;amp; call the private `find_by_name` instance method
    new(client: client, name: name).find_by_name(name)
  end
  private 
  def find_by_name(name)
    # begin block so the get_parameter call can be
    # retried if AWS throttles the request
    begin
      # declare instance variable with self.response,
      # set to the AWS SDK client calling the
      # get_parameter method
      # with the name &amp;amp; with_decryption options set
      self.response = client.get_parameter({
        name: name,
        with_decryption: true,
      })
      # rescue to look for AWS SSM throttling errors;
      # the exception below is placed in variable "e"
    rescue Aws::SSM::Errors::ThrottlingException =&amp;gt; e 
      p "Sleeping for 60 seconds while getting parameters."
      sleep(60)
      # re-runs what is in the begin block
      retry
    end
    self
  end

  # creates a `tag_list` instance var.
  # `||=` is Ruby's memoization idiom: if `tag_list` is
  # already set, skip the call; if not, set it to what is
  # on the right side of the equals sign.
  # The purpose is to cache the response from
  # `list_tags_for_resource`¹, called with
  # resource_type set to Parameter and the 
  # resource_id set to name
  def tag_list
    @tag_list ||= client.list_tags_for_resource({resource_type: 'Parameter', resource_id: name})
  end
  # checks the `tag_list` method above &amp;amp; runs a 
  # select over the tag_list hash,
  # looking for an entry whose `key` matches the skip tag,
  # then checks for a match with the `.any?` 
  # boolean method. If a `skip_sync` tag exists, the lambda
  # function will not replicate this parameter.
  # If it does not exist, replication proceeds. 
  # You may want to skip syncing for region-specific resources. 
  # If you later want to replicate a skip_sync param, simply
  # remove the tag in question and on the next run the param will sync.
  def skip_sync?
    tag_list[:tag_list].select {|key| key[:key] == $skip_tag }.any?
  end
  # Calls the SDK's `put_parameter` method on the `client_target` connection.
  # `put_parameter` replicates the name, value, and type, with overwrite enabled. This method
  # also copies the tags from the tag_list method over to the target resource by name.
  def sync_to_region(client_target)
    client_target.put_parameter({
      name: response['parameter']['name'], # required
      value: response['parameter']['value'], # required
      type: response['parameter']['type'], # accepts String, StringList, SecureString
      overwrite: true,
    })
    client_target.add_tags_to_resource({resource_type: 'Parameter', resource_id: name, tags: tag_list.to_h[:tag_list]})    
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
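&lt;p&gt;The begin/rescue/retry throttling pattern in &lt;code&gt;find_by_name&lt;/code&gt; can be tried out without AWS at all; in this sketch a plain &lt;code&gt;RuntimeError&lt;/code&gt; stands in for &lt;code&gt;Aws::SSM::Errors::ThrottlingException&lt;/code&gt;, and the sleep is shortened to zero.&lt;/p&gt;

```ruby
# Simulate two throttled attempts followed by a success; the real code
# rescues Aws::SSM::Errors::ThrottlingException and sleeps 60 seconds.
attempts = 0
value = begin
  attempts += 1
  raise 'Rate exceeded' unless attempts == 3 # fail the first two tries
  'parameter-value'
rescue RuntimeError
  sleep(0) # stands in for the 60-second pause
  retry
end
# value ends up as 'parameter-value' after 3 attempts
```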






&lt;p&gt;The next file I will discuss is the &lt;code&gt;ssm_full_replication.rb&lt;/code&gt; piece of the code. As you may gather from the name, this is responsible for full replication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# this pulls the AWS sdk gem
require 'aws-sdk-ssm'
require 'aws-sdk-sqs'
require_relative 'parameter_store'
# Declare global variables which are set to the
# respective values from CloudFormation template.
$target_region = ENV['TARGET_REGION'] or raise "Missing TARGET_REGION variable."
$skip_tag = ENV['SKIP_SYNC'] or raise "Missing SKIP_SYNC variable."
$stage_env = ENV['STAGE_ENV']
# `region` is hardcoded to us-east-1, the source region.
# var `sqs_client` is a new SQS client connection in the target region,
# var `sts_client` a new STS client conn in the source region
# (STS is only used to look up the account ID for the queue URL).
# `send_message` is called on `sqs_client` with queue_url &amp;amp; message_body as params.
def send_params_to_sqs(name)
  region = "us-east-1"
  sqs_client = Aws::SQS::Client.new(region: $target_region)
  sts_client = Aws::STS::Client.new(region: region)
  sqs_client.send_message(
    queue_url: "https://sqs.#{region}.amazonaws.com/#{sts_client.get_caller_identity.account}/SSM-SQS-replication-#{$stage_env}-#{region}",
    message_body: name
  )
end
# sets new SSM client connection in source region
# and new SSM client_target connection in target region
def handler(event:, context:)
  client = Aws::SSM::Client.new
  client_target = Aws::SSM::Client.new(region: $target_region)
  # next_token starts as nil so the first describe_parameters call begins at the first page
  next_token = nil
  # loop over pages of parameters; the begin block
  # lets the rescue below retry a page on throttling.
  loop do 
    begin
      # describe_batch is set to value from 
      # describe_parameters² call on the client variable.
      @describe_batch = client.describe_parameters({
        # parameter_filter limits request results to what we need
        parameter_filters: [
          {
            key: "Type",
            values: ["String", "StringList", "SecureString"]
          },
        ],
        # next_token is set to next set of items to return
        next_token: next_token,
      })
      # iterate over the batch of parameters and
      # send each param name to the send_params_to_sqs method
      @describe_batch.parameters.each do |item|
        send_params_to_sqs(item.name)
      end
      # break ends the loop (and the function) once next_token is empty.
      break if @describe_batch.next_token.nil?
      next_token = @describe_batch.next_token
      # exception handling: on a throttling error, pause for 60 seconds before the loop retries the page.
    rescue Aws::SSM::Errors::ThrottlingException
      p "Sleeping for 60 seconds while describing parameters."
      sleep(60)
    end
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
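&lt;p&gt;The pagination loop above can be sketched offline: the fake pages and &lt;code&gt;fetch_page&lt;/code&gt; helper below are stand-ins for &lt;code&gt;describe_parameters&lt;/code&gt;, which returns a batch of results plus a token for the next page, and the array push stands in for &lt;code&gt;send_params_to_sqs&lt;/code&gt;.&lt;/p&gt;

```ruby
# Two fake pages of parameter names; real describe_parameters calls
# return up to 50 results per page plus a next_token.
PAGES = [
  { names: ['/app/db_url', '/app/db_pass'], next_token: 'page-2' },
  { names: ['/app/api_key'],                next_token: nil },
]

# A stand-in for describe_parameters; only handles these two pages.
def fetch_page(next_token)
  next_token.nil? ? PAGES[0] : PAGES[1]
end

sent = []
next_token = nil
loop do
  batch = fetch_page(next_token)
  batch[:names].each { |name| sent.push(name) } # stands in for send_params_to_sqs
  break if batch[:next_token].nil?              # no more pages: stop
  next_token = batch[:next_token]
end
sent # collects all three names, in page order
```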






&lt;p&gt;The last file to share is the &lt;code&gt;ssm_regional_replication.rb&lt;/code&gt; file. This file is event based and does the regional replication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# this pulls the AWS sdk gem
require 'aws-sdk-ssm'
require_relative 'parameter_store'
# Global vars for file
$target_region = ENV['TARGET_REGION'] or raise "Missing TARGET_REGION variable."
$skip_tag = ENV['SKIP_SYNC'] or raise "Missing SKIP_SYNC variable."
# EventBridge (CloudWatch) events arrive in a different format than
# SQS-triggered invocations, so this method extracts the operation &amp;amp; name from either.
def massage_event_data(event)
  # pull out values from a cloudwatch invocation
  operation = event.fetch('detail', {})['operation']
  name      = event.fetch('detail', {})['name']
  return operation,name if operation &amp;amp;&amp;amp; name
  # SQS records carry only the parameter name, so treat them as updates
  operation = 'Update' 
  name      = event.fetch('Records', []).first['body']
  return operation,name 
end
def handler(event:, context:)
  # set the operation and name vars from the output of massage_event_data.
  # create new client &amp;amp; target connections for SSM
  operation,name = massage_event_data(event)
  client = Aws::SSM::Client.new
  client_target = Aws::SSM::Client.new(region: $target_region)
  # this logic runs the event-based code. If the operation from 
  # the event is equal to either Update or Create,
  # the ps var uses the ParameterStore find_by_name class method,
  # passing the client &amp;amp; name.
  if operation == 'Update' || operation == 'Create'
    ps = ParameterStore.find_by_name(client, name)
    # if the parameter has a skip_sync tag, the puts string below
    # appears in the CloudWatch logs; if there is no tag,
    # it syncs to the target region.
    if ps.skip_sync?
      puts "This parameter has been opted out, not replicating."
    else
      ps.sync_to_region(client_target)
    end
  # if the operation is delete in the source region, then the delete_parameter method is called on the
  # client_target and it's also deleted from the target_region to ensure parity.
  elsif operation == 'Delete'
    response = client_target.delete_parameter({
      name: name, # required; the parameter name pulled from the event's detail key
    })
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
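&lt;p&gt;To see the two event shapes side by side, &lt;code&gt;massage_event_data&lt;/code&gt; can be exercised with sample payloads (the parameter names are illustrative); this version drops the SDK requires so it runs standalone.&lt;/p&gt;

```ruby
require 'json'

# Same extraction logic as massage_event_data above.
def massage_event_data(event)
  operation = event.fetch('detail', {})['operation']
  name      = event.fetch('detail', {})['name']
  # EventBridge events carry both fields; SQS records carry only a name
  return operation, name unless operation.nil? || name.nil?
  ['Update', event.fetch('Records', []).first['body']]
end

# Lambda hands both event types to the handler as parsed JSON.
eventbridge_event = JSON.parse('{"detail": {"operation": "Create", "name": "/app/db_url"}}')
sqs_event         = JSON.parse('{"Records": [{"body": "/app/api_key"}]}')

massage_event_data(eventbridge_event) # returns ["Create", "/app/db_url"]
massage_event_data(sqs_event)         # returns ["Update", "/app/api_key"]
```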



&lt;p&gt;References to AWS API docs page:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SSM/Client.html#list_tags_for_resource-instance_method"&gt;list_tags_for_resource-instance_method&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SSM/Client.html#describe_parameters-instance_method"&gt;describe_parameters&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you want to be sure that no parameters are missed, you can set up a CloudWatch alarm on failed lambda invocations, or on an SQS queue that has stopped sending messages. I hope this has helped others who are looking for a way to replicate SSM parameters in AWS from one region to another. That's the end of the code. I know it is a lot to digest, so if you have any questions, please leave a comment and I'll do my best to follow up.&lt;/p&gt;
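&lt;p&gt;As a hedged sketch of that alarm idea (the resource name and thresholds here are illustrative and assume the template above; it uses the standard &lt;code&gt;AWS/Lambda&lt;/code&gt; &lt;code&gt;Errors&lt;/code&gt; metric), something like this could be added to the Resources section:&lt;/p&gt;

```yaml
# Hypothetical alarm on failed invocations of the full replication lambda.
ReplicationFailureAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: SSM replication lambda reported errors
    Namespace: AWS/Lambda
    MetricName: Errors
    Dimensions:
      - Name: FunctionName
        Value: !Ref LambdaFullReplication
    Statistic: Sum
    Period: 300
    EvaluationPeriods: 1
    Threshold: 1
    ComparisonOperator: GreaterThanOrEqualToThreshold
    TreatMissingData: notBreaching
```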

</description>
      <category>aws</category>
      <category>ssm</category>
      <category>lambda</category>
      <category>replication</category>
    </item>
  </channel>
</rss>
