<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Chris Gradwohl</title>
    <description>The latest articles on Forem by Chris Gradwohl (@cgradwohl).</description>
    <link>https://forem.com/cgradwohl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F556189%2F57b599e3-a4c9-4088-8f90-1d5c7175fcfa.png</url>
      <title>Forem: Chris Gradwohl</title>
      <link>https://forem.com/cgradwohl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cgradwohl"/>
    <language>en</language>
    <item>
      <title>DynamoDB Key Partition Strategies for SaaS</title>
      <dc:creator>Chris Gradwohl</dc:creator>
      <pubDate>Tue, 15 Mar 2022 18:36:17 +0000</pubDate>
      <link>https://forem.com/courier/dynamodb-key-partition-strategies-for-saas-2o35</link>
      <guid>https://forem.com/courier/dynamodb-key-partition-strategies-for-saas-2o35</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;Amazon DynamoDB&lt;/a&gt; is a fully managed NoSQL database service built for scalability and high performance. It’s one of the most popular databases used at SaaS companies, including &lt;a href="https://www.courier.com/" rel="noopener noreferrer"&gt;Courier&lt;/a&gt;. We selected DynamoDB for the same reasons as everyone else: autoscaling, low cost, zero down time. However, at scale, DynamoDB can present serious performance issues.&lt;/p&gt;

&lt;p&gt;SaaS applications commonly follow a multi-tenant architecture, in which a single instance of the software serves many customers. At scale, uneven partitioning of tenant data in Amazon DynamoDB often leads to hot-key problems, so you need to know how to partition tenant data effectively in order to prevent performance bottlenecks as the application grows over time.&lt;br&gt;
This article discusses a problem that early-stage SaaS companies face when they reach hyper-growth, and two solutions that can be used to tackle the resulting challenges with Amazon DynamoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Naive Partition Key and the Noisy Neighbor Problem
&lt;/h2&gt;

&lt;p&gt;During heavy traffic spikes, you may see read and write throttling exceptions thrown from DynamoDB. On further investigation, it should be clear that the DynamoDB throttling errors are correlated with large spikes in request traffic.&lt;br&gt;
To understand why, we need to look at how a DynamoDB item is stored and what hard limits the service enforces. First, remember that DynamoDB stores your data across multiple partitions. Every item in your table has a primary key that includes a partition key, and this partition key determines the partition on which the item lives. Finally, each partition can support a maximum of 3,000 read capacity units (RCUs) or 1,000 write capacity units (WCUs). If a “hot” partition exceeds these hard service limits, the overall performance of the table can degrade.&lt;/p&gt;

&lt;p&gt;When developing with DynamoDB for the first time, it’s tempting to implement a data model that uses the tenant id as its partition key, which introduces a “noisy neighbor” problem. When one tenant (or a collection of tenants) makes a high volume of requests, the system fetches a high volume of records from that tenant’s partition. With rapid user growth, you are likely to see significant spikes in request volume from certain tenants, and those spikes can ultimately throttle the entire table, slowing down the system and potentially impacting all users.&lt;/p&gt;

&lt;p&gt;If you’ve identified this issue in your DynamoDB tables, the next step is to find solutions to the performance bottlenecks or risk downtime for the application. While the tenant id is a natural, yet naive, partition key design, it simply cannot scale past the aforementioned service limits. You therefore need a new partition key design that can support much higher throughput. There are two ideal strategies for multi-tenant data modeling with DynamoDB. Each key design supports a strict set of access patterns and should be chosen based on your application’s requirements.&lt;/p&gt;
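&lt;p&gt;To make the noisy neighbor problem concrete, here is a minimal sketch in plain Python (with hypothetical tenant IDs) showing how a tenant-id partition key concentrates a busy tenant’s items behind a single key, while a high-cardinality key spreads them out:&lt;/p&gt;

```python
from collections import Counter
import uuid

# Hypothetical traffic: one "noisy" tenant generates most of the requests.
requests = [("tenant-a", uuid.uuid4().hex) for _ in range(95)]
requests += [("tenant-b", uuid.uuid4().hex) for _ in range(5)]

# Naive design: partition key = tenant id. All of tenant-a's traffic
# lands behind one partition key, so one partition absorbs 95% of the load.
naive_keys = Counter(tenant_id for tenant_id, _ in requests)
print(naive_keys["tenant-a"])  # 95 items behind a single partition key

# High-cardinality design: partition key = request id. Each item gets
# its own partition key, so the load spreads evenly across partitions.
random_keys = Counter(request_id for _, request_id in requests)
print(max(random_keys.values()))  # 1 item per partition key
```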

&lt;h2&gt;
  
  
  The Random Partition Key Strategy
&lt;/h2&gt;

&lt;p&gt;The first and most obvious strategy to increase throughput is to use a random partition key. For tenant data that is not frequently updated, this strategy lets us sidestep write throughput concerns altogether. As an example, consider an application that needs to store a tenant’s incoming requests in a DynamoDB table. Using the request_id as the partition key gives fairly high read and write throughput, assuming the request_id is random and of high cardinality. Each request essentially gets its own partition, so we can achieve 3,000 reads per second on an individual request item (assuming the item is 4KB or less). This is extremely high read throughput for an entity like an API request, and it serves our use case well.&lt;/p&gt;

&lt;p&gt;The major downside to this partition key design is that it is impossible to fetch all items for a given tenant, which is a common access pattern for multi-tenant applications. Yet for use cases where items do not require many updates, a random partition key provides essentially unlimited write throughput and very acceptable read throughput.&lt;/p&gt;
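&lt;p&gt;A sketch of what such an item might look like, assuming a hypothetical requests table keyed by a random request id (the &lt;code&gt;request#&lt;/code&gt; prefix and attribute names are illustrative, not the article’s actual schema):&lt;/p&gt;

```python
import uuid

def build_request_item(tenant_id, payload):
    """Build a DynamoDB-style item keyed by a random, high-cardinality
    request id rather than by the tenant id (hypothetical schema)."""
    return {
        "pk": f"request#{uuid.uuid4().hex}",  # random partition key
        "tenant_id": tenant_id,               # kept as a plain attribute
        "payload": payload,
    }

item = build_request_item("tenant-a", {"path": "/send", "status": 200})
print(item["pk"].startswith("request#"))  # True
```

&lt;p&gt;Note the tradeoff described above: because &lt;code&gt;tenant_id&lt;/code&gt; is only a plain attribute here, there is no efficient way to query all of a tenant’s requests.&lt;/p&gt;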

&lt;h2&gt;
  
  
  The Sharded Partition Key Strategy
&lt;/h2&gt;

&lt;p&gt;To be able to retrieve all items for a given tenant, you need to split a tenant’s partition into multiple smaller partitions, or shards, and distribute the data evenly across those shards. This partition sharding technique has a few important characteristics. At item write time, you need to be able to compute a shard key from a given range. The sharding function must have high cardinality and be well distributed; if it is not, you will still end up with a few hot partitions. The shard range and item size ultimately determine your final throughput. Finally, at item read time, you iterate across the shard range of a tenant’s partition and retrieve all items for the given tenant.&lt;/p&gt;

&lt;p&gt;For example, let’s say we want to be able to support 10,000 writes per second for a given tenant. If we can guarantee that each item is 4KB or less, and we assume a shard range of 10, then we can use a simple random number generator over our chosen range to compute the shard key. &lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/z7iqk1q8njt4/5LtLYwHyKu6KdSLRKmCTRx/90dd56ffee45963eb2f830dcf10631d5/dynamodb-partion-key-1.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/z7iqk1q8njt4/5LtLYwHyKu6KdSLRKmCTRx/90dd56ffee45963eb2f830dcf10631d5/dynamodb-partion-key-1.png" alt="dynamodb-partion-key-1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then at item write time, we simply compute the shard for the tenant and append it to the partition key.&lt;/p&gt;

&lt;p&gt;&lt;a href="//images.ctfassets.net/z7iqk1q8njt4/1iX9LcVl0nXJsAvN6wbX46/27faaf0852025dc5e347ccb9b5be9dff/dynamodb-partion-key-2.png" class="article-body-image-wrapper"&gt;&lt;img src="//images.ctfassets.net/z7iqk1q8njt4/1iX9LcVl0nXJsAvN6wbX46/27faaf0852025dc5e347ccb9b5be9dff/dynamodb-partion-key-2.png" alt="dynamodb-partion-key-2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Consider access patterns and application requirements first&lt;/li&gt;
&lt;li&gt;Keep item size under 4KB, and offload large payloads to S3&lt;/li&gt;
&lt;li&gt;Beware of GSIs and their ability to throttle the table&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>saas</category>
      <category>dynamodb</category>
      <category>keypartition</category>
    </item>
    <item>
      <title>From MVP to Production Ready With Serverless</title>
      <dc:creator>Chris Gradwohl</dc:creator>
      <pubDate>Mon, 07 Jun 2021 23:19:29 +0000</pubDate>
      <link>https://forem.com/courier/from-mvp-to-production-ready-with-serverless-42pg</link>
      <guid>https://forem.com/courier/from-mvp-to-production-ready-with-serverless-42pg</guid>
      <description>&lt;p&gt;Having been at startups my entire career, I’ve encountered the dichotomy between speed and scale when building software products.The usual attitude entrepreneurs take when building the first iterations of their products is “...we aren’t anywhere close to facing problems of scale, so let’s worry about that when we get there.” This first version of the software is built and shipped fast, and it’s only a matter of time before engineers realize that they simply don’t have the foundation to iterate quickly. Inevitably, limitations within their own infrastructure causes slow development cycles, impossible deadlines, and too much stress to maintain creativity and functionality. Trust me, I have been there. &lt;/p&gt;

&lt;p&gt;In a startup, it is difficult, if not impossible, to find the resources necessary to solve these problems at scale. I have found that Serverless is an excellent response to this challenge. I think Jeremy Daly has summarized it nicely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Serverless gives us the power to focus on delivering value to our customers without worrying about the maintenance and operations of the underlying compute resources.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I want to explore some of our favorite Serverless stories from Courier, review some Serverless basics, and explore how Serverless has empowered our team to accomplish more with less. Perhaps through these musings you can gain a better understanding of the Serverless landscape and determine if it's the right approach for your next project or startup.&lt;/p&gt;

&lt;h2&gt;
  
  
  60 Days to Monetization
&lt;/h2&gt;

&lt;p&gt;When I joined Courier, I was curious why founder Troy Goode decided to go with Serverless, since it was a relatively new technology with a small community of active developers. When I asked, he said he was looking for “the speed of Ruby on Rails or Django with the scale of Kubernetes” without having to choose one over the other. The Serverless Framework was a perfect fit. Troy, as a team of one, was able to build Courier’s powerful send pipeline, pitch potential customers, and actually land a paying account within 60 days of development. This was incredibly exciting for me and validated the idea that a small team can become production ready extremely quickly. &lt;/p&gt;

&lt;p&gt;What’s even more impressive is that the core design of our send pipeline has remained largely unchanged in the last 18 months. This has allowed us to focus on specific customer use cases and not the underlying infrastructure. This foundation has served us well, allowing us to continue to develop at a rapid pace and respond to customer feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 is Our Friend
&lt;/h2&gt;

&lt;p&gt;At Courier, we are big fans of S3. With all the new features and services that seem to explode out of re:Invent each year, S3 doesn’t get the love it deserves. With its guaranteed uptime of 99.9%, its dead-simple API, and its low cost, what’s not to love?&lt;/p&gt;

&lt;p&gt;One of my favorite design patterns that I picked up at Courier is the &lt;strong&gt;Web Service to S3&lt;/strong&gt; pattern due to its flexibility and simplicity.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0wnl7ny6rc39mqtkbn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0wnl7ny6rc39mqtkbn7.png" alt="AWS architecture image 1" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern is an excellent fit when you need to manage time-consuming processing but don’t want to wait for its completion. In this example, a &lt;strong&gt;Web Service&lt;/strong&gt; writes an HTTP request as an object to an S3 bucket called &lt;strong&gt;RequestStore&lt;/strong&gt;. This triggers a Lambda function called &lt;strong&gt;Worker&lt;/strong&gt;, which can then send the request to another service to be processed.&lt;/p&gt;

&lt;p&gt;This is particularly easy to configure with the Serverless Framework. First, define an S3 bucket using CloudFormation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
 Resources:
   RequestStore:
     Type: AWS::S3::Bucket
     Properties:
       AccessControl: PublicRead
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then define the lambda function with an S3 trigger event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Worker:
 events:
   - s3:
       bucket:
         Ref: RequestStore
       event: s3:ObjectCreated:Put
 handler: handlers/worker.default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I love being able to reference this manifest later and visualize the system just by looking at the code.&lt;/p&gt;
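&lt;p&gt;The handler itself could look roughly like this (sketched in Python for illustration; the &lt;code&gt;handlers/worker.default&lt;/code&gt; path in the config above implies a JavaScript module, and the forwarding step is left as a placeholder):&lt;/p&gt;

```python
def worker(event, context=None):
    """Sketch of the Worker Lambda: for each S3 record in the trigger
    event, extract the bucket/key of the stored request so it can be
    forwarded to another service for processing."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real handler would fetch the object with boto3 here and
        # hand its contents to the downstream service.
        processed.append((bucket, key))
    return processed

# Shape of the event S3 delivers on s3:ObjectCreated:Put:
event = {"Records": [{"s3": {"bucket": {"name": "request-store"},
                             "object": {"key": "req-123.json"}}}]}
print(worker(event))  # [('request-store', 'req-123.json')]
```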

&lt;p&gt;Another powerful use case for S3 is avoiding the 400KB item limit with DynamoDB. When you need to store large item attributes in Dynamo, you can store them as an object in Amazon S3 and then store the object reference in the Dynamo item.&lt;/p&gt;

&lt;p&gt;This approach has proven useful on numerous occasions at Courier, but is not without its tradeoffs. This strategy does not support transactions, therefore your application should handle any failures or errors that may occur.&lt;/p&gt;
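&lt;p&gt;A minimal sketch of the offloading decision, assuming a hypothetical helper and bucket name (the caller would perform the actual S3 upload and DynamoDB put with boto3):&lt;/p&gt;

```python
import json

MAX_ITEM_BYTES = 400_000  # DynamoDB's item size limit is 400KB

def prepare_item(pk, payload, bucket="payload-store"):
    """If the payload would push the item past DynamoDB's 400KB limit,
    return an item holding only an S3 reference plus the object to
    upload; otherwise inline the payload. (Hypothetical helper.)"""
    body = json.dumps(payload)
    if len(body.encode("utf-8")) >= MAX_ITEM_BYTES:
        s3_key = f"{pk}/payload.json"
        item = {"pk": pk, "payload_ref": f"s3://{bucket}/{s3_key}"}
        return item, (bucket, s3_key, body)  # caller uploads via boto3
    return {"pk": pk, "payload": payload}, None

small, upload = prepare_item("msg-1", {"ok": True})
print(upload is None)  # True: small payloads stay inline
big, upload = prepare_item("msg-2", {"blob": "x" * 500_000})
print(big["payload_ref"])  # s3://payload-store/msg-2/payload.json
```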

&lt;h2&gt;
  
  
  Lambda Bottlenecks From A Dynamo Stream
&lt;/h2&gt;

&lt;p&gt;An interesting aspect of Serverless development is the ability to finely tune your services based on their usage. At Courier, this was done out of necessity after we noticed a performance issue in one of our key logging services. Here is a simplified drawing of the problematic design.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusltk09rpspt1xke5q5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusltk09rpspt1xke5q5p.png" alt="AWS architecture image 2" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this scenario, we have several services that write to a Dynamo table. This table streams batches of records to a lambda function, which writes these records to Elastic to be queried by a UI. After further investigation, we found that the lambda’s &lt;strong&gt;iterator age&lt;/strong&gt; was continuously increasing, causing a performance issue in the UI. &lt;/p&gt;

&lt;p&gt;Let’s quickly define some terms before we jump to the happy ending of this story. A lambda’s &lt;strong&gt;batchSize&lt;/strong&gt; is simply the number of records to read from the event stream’s shard. A lambda’s &lt;strong&gt;iterator age&lt;/strong&gt; is a CloudWatch metric that measures how long it took to process the last record in the batch. Since our lambda was processing new events and the iterator age was increasing, it was taking more and more time to process each new record due to back pressure. &lt;strong&gt;In other words, as more records were written to the table, it took longer for those records to reach the UI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The cause was an increase in the product's usage, so this turned out to be both a great problem to have and one with a relatively simple solution. Depending on the Lambda’s event source, AWS lets you define the batch size of records for the triggering lambda event. In addition to batch size, you can also define the &lt;strong&gt;parallelizationFactor&lt;/strong&gt;, which sets a multiple of concurrent lambda invocations per shard. For example, if parallelizationFactor is set to 2, you can have 200 concurrent Lambda invocations processing 100 shards. Thanks to the Serverless Framework, this is as simple as defining the two parameters within the event section of the lambda definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LambdaWorker:
   events:
     - stream:
         type: dynamodb
         arn:
           Fn::GetAtt:
             - DynamoTable
             - StreamArn
         batchSize: 1
         parallelizationFactor: 5
   handler: handlers/lambda.worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
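&lt;p&gt;As a sanity check on the concurrency math above, a tiny sketch (plain Python):&lt;/p&gt;

```python
def max_concurrent_invocations(shard_count, parallelization_factor):
    """Lambda runs up to parallelizationFactor concurrent invocations
    per stream shard, so total concurrency is the product of the two."""
    if parallelization_factor not in range(1, 11):
        raise ValueError("parallelizationFactor must be between 1 and 10")
    return shard_count * parallelization_factor

# The example from the text: factor 2 across 100 shards.
print(max_concurrent_invocations(100, 2))  # 200
# The configuration above: factor 5 per shard.
print(max_concurrent_invocations(1, 5))    # 5
```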



&lt;p&gt;AWS and Serverless made this situation a whole lot easier to deal with thanks to the built-in CloudWatch metrics and the configurability of AWS services. After reconfiguring the lambda, we saw almost immediate back pressure relief and went about our day. &lt;/p&gt;

&lt;h2&gt;
  
  
  Green Field: Automations
&lt;/h2&gt;

&lt;p&gt;Starting a new project from scratch is exciting. Optimism is high, there are lots of creative discussions and opportunities to innovate. When I joined Courier, I was fortunate enough to lead the effort, alongside CTO Seth Carney, on a new greenfield project called Automations, which set out to allow users more control of how and when they could send messages.&lt;/p&gt;

&lt;p&gt;We set out to let users define an Automation from a discrete set of job definitions that we later named steps. To process these &lt;strong&gt;steps&lt;/strong&gt;, we designed a simple but effective job processing system.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7a5k7jpxm2tsbm9riqr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7a5k7jpxm2tsbm9riqr8.png" alt="AWS architecture image 3" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First we used the trusty &lt;strong&gt;Web Service to S3&lt;/strong&gt; design pattern I talked about earlier to quickly validate the incoming automation definition, store it into S3, and return a response to the user. At this point no jobs have been processed, only validated. We don’t want the user to wait for the entire automation to execute before receiving a response. &lt;/p&gt;

&lt;p&gt;Next, the request is picked up by the &lt;strong&gt;RequestWorker&lt;/strong&gt;, where each individual job is processed in the order in which it was defined. After experimenting with other services, we chose SQS as a job processor, due to its unlimited throughput and its ability to retry messages with a DLQ. Finally, the &lt;strong&gt;JobWorker&lt;/strong&gt; is triggered with the job definition in the event payload. Its role is to execute the job based on its definition, then enqueue the next job. Defining an SQS Queue and a Lambda with an SQS trigger is similar to the &lt;strong&gt;Web Service to S3&lt;/strong&gt; pattern we defined earlier.&lt;/p&gt;

&lt;p&gt;First let’s define the Queue with CloudFormation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
 Resources:
   JobQueue:
     Type: AWS::SQS::Queue
     Properties:
       VisibilityTimeout: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then define the lambda function, this time with an SQS trigger event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;JobWorker:
 events:
   - sqs:
       arn:
         Fn::GetAtt:
           - JobQueue
           - Arn
 handler: handlers/worker.default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the funny-looking syntax. This is a CloudFormation &lt;strong&gt;intrinsic function&lt;/strong&gt;, a way to retrieve the underlying ID of an AWS resource. You will notice that this was not required for our S3 trigger example, which is a quirk of CloudFormation. Since it is difficult to keep track of which services require which intrinsic functions, I found this &lt;a href="https://theburningmonk.com/cloudformation-ref-and-getatt-cheatsheet/" rel="noopener noreferrer"&gt;amazing cheatsheet&lt;/a&gt; from Yan Cui very helpful.&lt;/p&gt;
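&lt;p&gt;To round out the sketch, the JobWorker handler might look roughly like this (Python here for illustration; the job/step schema is hypothetical, and a real handler would enqueue the next job via boto3’s &lt;code&gt;sqs.send_message&lt;/code&gt;):&lt;/p&gt;

```python
import json

def job_worker(event, enqueue=print):
    """Sketch of the JobWorker Lambda: each SQS record carries one job
    definition plus the list of remaining steps. Execute the job, then
    enqueue the next one. `enqueue` stands in for an SQS send."""
    executed = []
    for record in event.get("Records", []):
        job = json.loads(record["body"])     # SQS puts the message in "body"
        executed.append(job["step"])         # "execute" the current step
        remaining = job.get("remaining", [])
        if remaining:
            enqueue(json.dumps({"step": remaining[0],
                                "remaining": remaining[1:]}))
    return executed

# Shape of an SQS trigger event with one queued job:
event = {"Records": [{"body": json.dumps(
    {"step": "send-email", "remaining": ["wait-5m", "send-sms"]})}]}
print(job_worker(event, enqueue=lambda body: None))  # ['send-email']
```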

&lt;p&gt;We were able to design, implement, and ship this architecture within a week. Considering this was implemented with a team of one, I am very proud of that accomplishment. Since then, we added many more services, features, and functionalities to Automations, but this first implementation not only kicked off a great working relationship with my colleagues, but it also proved to me the true value of a Serverless driven infrastructure.&lt;/p&gt;

&lt;p&gt;Author: Chris Gradwohl&lt;/p&gt;

&lt;p&gt;Serverless does not come without its own set of challenges and frustrations. Regardless, it has become my favorite way to build products and companies. When faced with the uncertainty of the market and the need to iterate quickly, I think choosing Serverless allows tremendous development speed with scale built in. What are your thoughts on Serverless? I hope you enjoyed these stories, and I hope you feel empowered to dive in and build your next project with Serverless.&lt;/p&gt;

</description>
      <category>mvp</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Tips and tricks to set up your Apple M1 for development</title>
      <dc:creator>Chris Gradwohl</dc:creator>
      <pubDate>Wed, 20 Jan 2021 13:56:13 +0000</pubDate>
      <link>https://forem.com/courier/tips-and-tricks-to-setup-your-apple-m1-for-development-547g</link>
      <guid>https://forem.com/courier/tips-and-tricks-to-setup-your-apple-m1-for-development-547g</guid>
      <description>&lt;p&gt;I recently joined Courier as a Software Engineer and part of the onboarding process was to set up and configure my development environment on the new M1 MacBook Pro. This task was more complicated than usual because, with the new MacBooks, Apple has replaced their long-running Intel processors with their own  &lt;a href="https://www.apple.com/mac/m1/" rel="noopener noreferrer"&gt;M1 chip&lt;/a&gt;. To help you take full advantage of the power of the new MacBooks, here are some tips and tricks I picked up when setting up my own machine.&lt;/p&gt;

&lt;h1&gt;
  
  
  Rosetta vs Native Terminal
&lt;/h1&gt;

&lt;p&gt;Command line tools are crucial for our day-to-day workflows. However, several critical CLI tools like &lt;code&gt;nvm&lt;/code&gt; and &lt;code&gt;brew&lt;/code&gt; do not have native versions built for the new M1 architecture, so installing them on your native terminal can be frustrating.&lt;/p&gt;

&lt;p&gt;Thankfully, with Apple's translation layer &lt;a href="https://developer.apple.com/documentation/apple_silicon/about_the_rosetta_translation_environment" rel="noopener noreferrer"&gt;Rosetta 2&lt;/a&gt;, we can easily download and compile applications that were built for x86_64 and run them on Apple Silicon. I’ll explain how to duplicate the macOS native terminal and force the duplicated terminal to always run with Rosetta 2. Using this "Rosetta" terminal makes it a breeze to install our preferred tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create a Rosetta Terminal
&lt;/h2&gt;

&lt;p&gt;First, duplicate the Terminal and rename it. Open Finder, navigate to the Applications/Utilities folder, right-click Terminal, and select "Duplicate."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3ffg9gh9b8blsy42gaqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3ffg9gh9b8blsy42gaqf.png" alt="Duplicate your terminal on Apple M1 MacBook" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rename this new terminal to something like "Rosetta-Terminal." Now right-click on your new Rosetta Terminal and click "Get Info." &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F64ly8min17g10h084gf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F64ly8min17g10h084gf2.png" alt="Rosetta-Terminal Context Menu" width="800" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the “Get info” menu, select "Open using Rosetta."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbq75l7bkwivhjiz6c2rw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbq75l7bkwivhjiz6c2rw.png" alt="Rosetta-Terminal Get Info" width="257" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have a special terminal that can be used to install our command line tools. During the install, they will be translated by Rosetta. After the install, we can use them from the native terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Install your tools with the Rosetta Terminal
&lt;/h2&gt;

&lt;p&gt;Let’s install some tools! Now that we have a dedicated Rosetta Terminal, we can install our CLI tools just like we would on an Intel MacBook. In this case, I’m going to install &lt;code&gt;nvm&lt;/code&gt;, but it’s the same process for any other CLI tool you may need, e.g. Homebrew, AWS CLI, etc.&lt;/p&gt;

&lt;p&gt;First, open up "Rosetta-Terminal" using Spotlight (just hit Cmd+Space).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5pr5du054wkf0gp2a2e4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5pr5du054wkf0gp2a2e4.png" alt="Spotlight Search Rosetta-Terminal" width="792" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm that you are using a Rosetta Terminal by entering the &lt;code&gt;arch&lt;/code&gt; command, which should return &lt;code&gt;i386&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6iwm7ax2wcwo93sp3slz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6iwm7ax2wcwo93sp3slz.png" alt="Terminal Arch command" width="304" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, install nvm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-o-&lt;/span&gt; https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then install your preferred Node.js version. I'll be using version 12.x:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvm &lt;span class="nb"&gt;install &lt;/span&gt;12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, open up the Native Terminal with Spotlight:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F96fs29jgrwiopplt99hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F96fs29jgrwiopplt99hx.png" alt="Spotlight Search Terminal" width="792" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm that you are using the Native Terminal by typing the &lt;code&gt;arch&lt;/code&gt; command, which should return &lt;code&gt;arm64&lt;/code&gt;. While you are here, you can also validate the installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fokq4ppc2h4digvuul987.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fokq4ppc2h4digvuul987.png" alt="Terminal Tool Versions" width="447" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, nvm, npm and node.js version 12.x have all been successfully translated and installed on Apple Silicon. 🎉&lt;/p&gt;

&lt;p&gt;I recommend using the "Rosetta-Terminal" for installing the rest of your command line tools and using the Native Terminal for your daily workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding and installing native applications
&lt;/h2&gt;

&lt;p&gt;Right now, there are still a few applications that don't offer full native support for Apple Silicon. So we have to install the x86_64 versions of these applications. This means that Rosetta will run in the background to translate the application and make it compatible to run on the M1, but this also means that it will not run in its fully ARM optimized glory.&lt;/p&gt;

&lt;p&gt;Before you install the rest of your applications, I recommend checking if they offer native support for Apple Silicon. Sometimes, a fully ARM native version is not available, but an ARM optimized beta version is. You can visit the website “&lt;a href="https://doesitarm.com" rel="noopener noreferrer"&gt;Does it ARM?&lt;/a&gt;” or &lt;a href="https://isapplesiliconready.com/" rel="noopener noreferrer"&gt;Is Apple silicon ready?&lt;/a&gt; and search for any app. It’s a great resource to find and install Apple Silicon versions of your apps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing VS Code Insiders (beta)
&lt;/h3&gt;

&lt;p&gt;For example, this is what I see when I search for &lt;a href="https://doesitarm.com/app/vs-code/" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4yred243hkbapryl2eaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4yred243hkbapryl2eaf.png" alt="DoesItARM VS Code" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, VS Code does not offer a fully native Apple Silicon release. But it does have a beta release, called Insiders, with native support! If you want to try it out, head over to &lt;a href="https://code.visualstudio.com/insiders/" rel="noopener noreferrer"&gt;VSCode Insiders&lt;/a&gt;. Remember to select the ARM64 version of the application on the download page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftjpva1704o5kdpa8dbxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftjpva1704o5kdpa8dbxb.png" alt="VS Code Insiders" width="661" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installation, open up Visual Studio Code - Insiders (and behold the blazing speed of the M1 🚀), press Cmd+Shift+P to open the Command Palette, and run the command that installs 'code-insiders' into your shell's PATH.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faxo6s12cwkxn6jvocr54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faxo6s12cwkxn6jvocr54.png" alt="VS Code Install CLT" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that’s installed, you can open files with VS Code from your terminal using the &lt;code&gt;code-insiders&lt;/code&gt; command. But since nobody has time for that many keystrokes, add this alias to your ~/.zshrc or equivalent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"code-insiders"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart your terminal and now you can open up files using the &lt;code&gt;code&lt;/code&gt; command – like a boss. 😎&lt;/p&gt;




&lt;p&gt;For other apps, finding an Apple Silicon native version is not this difficult. For example, Chrome offers a fully supported ARM64 version of its software:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff0slbn11a2t6t7hp7yhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff0slbn11a2t6t7hp7yhy.png" alt="DoesItARM Chrome" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the Chrome download page, make sure to select the version for "Mac with Apple Chip".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Favkeybyw9tockoybxhg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Favkeybyw9tockoybxhg8.png" alt="Select Chrome Version" width="626" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Final tip: Checking your app version&lt;/h2&gt;

&lt;p&gt;At the time of this writing, most applications offer fully supported ARM64 versions of their software, but a few (like VS Code, &lt;a href="https://github.com/nvm-sh/nvm" rel="noopener noreferrer"&gt;Node Version Manager&lt;/a&gt; (nvm) and &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;Homebrew&lt;/a&gt;) still do not. Over time, we should expect to see fully supported Apple Silicon versions of all our favorite apps.&lt;/p&gt;

&lt;p&gt;As you install applications on your new MacBook, you might notice that some auto-update to the new architecture while others do not. For example, I noticed that Chrome auto-updated to an x86_64 version. If you suspect that one of your apps is running under the wrong architecture, open Activity Monitor and check whether the process is running as Apple (ARM64) or Intel (x86_64):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbq0kjijlaz3k1d7mvvxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbq0kjijlaz3k1d7mvvxn.png" alt="Activity Monitor" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;
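&lt;p&gt;Alternatively, you can inspect an app's binary directly from the terminal with the standard &lt;code&gt;file&lt;/code&gt; command (the Chrome path below is its usual install location; adjust it for whichever app you want to check):&lt;/p&gt;

```shell
# Inspect which CPU architectures an app binary contains.
# A universal build lists both x86_64 and arm64; an Intel-only
# build lists only x86_64.
app_binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
if [ -e "$app_binary" ]; then
  file "$app_binary"
else
  echo "Binary not found: $app_binary"
fi
```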




&lt;p&gt;I hope you find these tips helpful! If you have any additional tips or questions about how I set up my M1, don’t hesitate to reach out!&lt;/p&gt;

&lt;p&gt;Also, you can find out more about where I work at &lt;a href="https://www.courier.com/" rel="noopener noreferrer"&gt;Courier&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>development</category>
      <category>technology</category>
      <category>tips</category>
      <category>apple</category>
    </item>
  </channel>
</rss>
