<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Davo Galavotti</title>
    <description>The latest articles on Forem by Davo Galavotti (@davo).</description>
    <link>https://forem.com/davo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F213926%2F31fb9533-874d-4c47-866a-2a7000e52246.jpeg</url>
      <title>Forem: Davo Galavotti</title>
      <link>https://forem.com/davo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/davo"/>
    <language>en</language>
    <item>
      <title>Ranking AWS Lambda Runtimes with ChatGPT 4</title>
      <dc:creator>Davo Galavotti</dc:creator>
      <pubDate>Sat, 01 Apr 2023 18:33:52 +0000</pubDate>
      <link>https://forem.com/davo/ranking-aws-lambda-runtimes-with-chatgpt-4-48kh</link>
      <guid>https://forem.com/davo/ranking-aws-lambda-runtimes-with-chatgpt-4-48kh</guid>
      <description>&lt;p&gt;In the ever-evolving world of cloud computing, AWS Lambda functions have become increasingly popular due to their serverless nature and ease of use. With numerous languages and runtime options available, it can be difficult to decide which one is the best fit for your project.&lt;/p&gt;

&lt;p&gt;In this post, I'll show you how I used ChatGPT-4 to analyze the cold starts duration and avg. duration of different AWS Lambda runtimes based on data from the &lt;a href="https://maxday.github.io/lambda-perf/" rel="noopener noreferrer"&gt;Lambda Cold Starts analysis&lt;/a&gt; project by &lt;a href="https://maxday.dev/" rel="noopener noreferrer"&gt;Maxime David&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Through this exercise, I learned a ton about the relative performance and trade-offs between languages by comparing cold starts, initialization and average duration.&lt;/p&gt;

&lt;p&gt;But before getting into it, a quick disclaimer: this is not advice for choosing the next runtime for your AWS Lambda functions. That said, the insights from Maxime David's analysis make it easier for developers to make informed decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1uhujhjw75x72ggb6ec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1uhujhjw75x72ggb6ec.png" alt="A screenshot of the Lambda Cold Starts analysis UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Methodology&lt;/h4&gt;

&lt;p&gt;The Lambda Cold Start tool generates data by deploying a "hello-world" function in various AWS-supported and custom runtimes. Functions are packaged in zip files, uploaded to an S3 bucket, and invoked as cold starts 10 times daily. The REPORT log line, containing init duration, max memory used, and other information, is saved to a DynamoDB table. This information is aggregated, and a new JSON file is created and committed to &lt;a href="https://github.com/maxday/lambda-perf" rel="noopener noreferrer"&gt;Maxime's GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A GitHub Pages-hosted static website fetches this JSON file, displaying results on a simple UI. The data is updated daily, ensuring accurate and up-to-date comparisons.&lt;/p&gt;
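&lt;p&gt;To make the methodology concrete, here's a sketch of the kind of parsing involved. This is my own illustration, not Maxime's actual code: it pulls the duration, max memory, and (on cold starts) the init duration out of a standard Lambda REPORT log line.&lt;/p&gt;

```python
import re

# A typical Lambda REPORT log line. "Init Duration" only appears on cold starts.
# Sketch for illustration -- not Maxime's actual code.
report = (
    "REPORT RequestId: 8f507cfc-example Duration: 1.22 ms "
    "Billed Duration: 2 ms Memory Size: 128 MB "
    "Max Memory Used: 14 MB Init Duration: 16.45 ms"
)

def parse_report(line):
    """Extract the metrics lambda-perf aggregates from a REPORT line."""
    m = re.search(
        r"Duration: (?P<duration>[\d.]+) ms.*?"
        r"Max Memory Used: (?P<max_memory>\d+) MB"
        r"(?:.*?Init Duration: (?P<init>[\d.]+) ms)?",
        line,
    )
    return {
        "duration_ms": float(m.group("duration")),
        "max_memory_mb": int(m.group("max_memory")),
        "init_duration_ms": float(m.group("init")) if m.group("init") else None,
    }

print(parse_report(report))
# {'duration_ms': 1.22, 'max_memory_mb': 14, 'init_duration_ms': 16.45}
```

&lt;p&gt;Warm invocations simply come back with no init duration, which is how cold starts can be told apart in the logs.&lt;/p&gt;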

&lt;h2&gt;Comparing Runtimes using ChatGPT-4&lt;/h2&gt;

&lt;p&gt;I started by copying the &lt;a href="https://github.com/maxday/lambda-perf/blob/main/data/2022-09-04.json" rel="noopener noreferrer"&gt;data generated&lt;/a&gt; by Maxime's tool and pasting it into ChatGPT using the GPT-4 model. No prompt, just the plain JSON file.&lt;/p&gt;

&lt;p&gt;ChatGPT automatically converted the JSON into a perfectly formatted Markdown table, which was quite helpful for visualizing the metadata and metrics.&lt;/p&gt;

&lt;p&gt;You can customize how the table is presented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 Make a markdown table and sort the runtimes by Avg. Cold Start Duration (in seconds). Add averageMemoryUse. Use displayName as Runtime for the Table Header. Don't include runtime: value in the header.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Runtime&lt;/th&gt;
&lt;th&gt;Avg. Cold Start Duration (sec)&lt;/th&gt;
&lt;th&gt;Avg. Memory Used (MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rust (prov.al2)&lt;/td&gt;
&lt;td&gt;0.016&lt;/td&gt;
&lt;td&gt;14.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go (provided)&lt;/td&gt;
&lt;td&gt;0.052&lt;/td&gt;
&lt;td&gt;18.9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go 1.x&lt;/td&gt;
&lt;td&gt;0.089&lt;/td&gt;
&lt;td&gt;29.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python 3.9&lt;/td&gt;
&lt;td&gt;0.112&lt;/td&gt;
&lt;td&gt;36.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python 3.7&lt;/td&gt;
&lt;td&gt;0.115&lt;/td&gt;
&lt;td&gt;35.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python 3.8&lt;/td&gt;
&lt;td&gt;0.117&lt;/td&gt;
&lt;td&gt;38.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dotnet7 AOT (prov.al2)&lt;/td&gt;
&lt;td&gt;0.120&lt;/td&gt;
&lt;td&gt;34.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GraalVM Java17 (prov.al2)&lt;/td&gt;
&lt;td&gt;0.121&lt;/td&gt;
&lt;td&gt;33.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ruby 2.7&lt;/td&gt;
&lt;td&gt;0.142&lt;/td&gt;
&lt;td&gt;30.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js 12.x&lt;/td&gt;
&lt;td&gt;0.154&lt;/td&gt;
&lt;td&gt;55.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dotnetcore 3.1&lt;/td&gt;
&lt;td&gt;0.156&lt;/td&gt;
&lt;td&gt;51.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js 16.x&lt;/td&gt;
&lt;td&gt;0.158&lt;/td&gt;
&lt;td&gt;57.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js 14.x&lt;/td&gt;
&lt;td&gt;0.177&lt;/td&gt;
&lt;td&gt;56.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js 18.x&lt;/td&gt;
&lt;td&gt;0.217&lt;/td&gt;
&lt;td&gt;67.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dotnet 6&lt;/td&gt;
&lt;td&gt;0.225&lt;/td&gt;
&lt;td&gt;59.8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java 11 Snapstart&lt;/td&gt;
&lt;td&gt;0.228&lt;/td&gt;
&lt;td&gt;70.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quarkus (prov.al2)&lt;/td&gt;
&lt;td&gt;0.233&lt;/td&gt;
&lt;td&gt;54.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java 11&lt;/td&gt;
&lt;td&gt;0.471&lt;/td&gt;
&lt;td&gt;74.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java 8&lt;/td&gt;
&lt;td&gt;0.532&lt;/td&gt;
&lt;td&gt;71.9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
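&lt;p&gt;If you'd rather not trust the model with the conversion, the same table can be produced deterministically in a few lines of Python. The field names below are my guesses based on the prompt (the prompt mentions &lt;code&gt;displayName&lt;/code&gt; and &lt;code&gt;averageMemoryUse&lt;/code&gt;; &lt;code&gt;averageColdStartDuration&lt;/code&gt; is an assumption), so check them against the actual JSON in the repo:&lt;/p&gt;

```python
import json

# Hypothetical sample of the lambda-perf JSON. Field names are assumptions
# based on the ChatGPT prompt; the values are taken from the table above.
payload = json.loads("""
[
  {"displayName": "Python 3.9",      "averageColdStartDuration": 0.112, "averageMemoryUse": 36.0},
  {"displayName": "Rust (prov.al2)", "averageColdStartDuration": 0.016, "averageMemoryUse": 14.1},
  {"displayName": "Go (provided)",   "averageColdStartDuration": 0.052, "averageMemoryUse": 18.9}
]
""")

def to_markdown(records):
    """Sort by cold start duration and render a Markdown table."""
    rows = sorted(records, key=lambda r: r["averageColdStartDuration"])
    lines = [
        "| Runtime | Avg. Cold Start Duration (sec) | Avg. Memory Used (MB) |",
        "| --- | --- | --- |",
    ]
    for r in rows:
        lines.append(
            f"| {r['displayName']} | {r['averageColdStartDuration']} | {r['averageMemoryUse']} |"
        )
    return "\n".join(lines)

print(to_markdown(payload))
```

&lt;p&gt;The difference, of course, is that the script does exactly one thing, while ChatGPT lets you keep reshaping the output conversationally.&lt;/p&gt;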

&lt;h3&gt;Casual data analysis using prompt engineering&lt;/h3&gt;

&lt;p&gt;I knew Rust was the fastest runtime, thanks to Maxime's tool, but I wanted to know how much faster it was in simple terms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 much faster Rust is compared to Python 3.9, 3.8, 3.7 and Node 12, 14, 16, 18? Use NLP by comparing it as "X time faster".

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The analysis was extremely helpful because it automatically assumed "average duration" would be a useful metric to compare.&lt;/p&gt;

&lt;p&gt;ChatGPT compared the average durations of 🦀 Rust, 🐍 Python 3.7, 3.8, 3.9, Node.js 12.x, 14.x, 16.x, 18.x, and Go. Based on this comparison, Rust emerged as the fastest runtime, followed closely by Go. Node.js and Python runtimes showcased varying degrees of performance, with some versions faring better than others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9jb78j9d4khhzo32uo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9jb78j9d4khhzo32uo4.png" alt="ChatGPT 4 results for How much faster Rust is compared to Python and Node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Later I asked &lt;code&gt;Add Go to the mix.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy4tpwllpli1n14l8khi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy4tpwllpli1n14l8khi.png" alt="ChatGPT 4 results for How much faster Rust and Go, compared to Python and Node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Cold Starts, Best vs. Worst&lt;/h4&gt;

&lt;p&gt;Using NLP I was able to understand that Rust is 3.20x faster than Python 3.9 and 10.29x faster than Node.js 18.x, which is impressive. But I wanted to compare cold starts as well; after all, that was the core idea of Maxime's tool. I also wanted to see how the best performer compares against the worst.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 compare Cold Starts now, how much better Rust is compared to Python 3.9, 3.8, 3.7 and Node 12, 14, 16, 18? Use NLP by comparing it as "cold start time is X times shorter".

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcne7c3fcg15jeu00ylf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcne7c3fcg15jeu00ylf.png" alt="ChatGPT 4 results for How much faster Rust and Go, compared to Python and Node"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 how much better Rust is compared with the worst performers?

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ywrhj08lpa7g9fx3h7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ywrhj08lpa7g9fx3h7w.png" alt="ChatGPT 4 results for how much better Rust is compared with the worst performers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Emojis make everything better&lt;/h3&gt;

&lt;p&gt;To visualize the performance of each runtime in a more evocative way, I asked ChatGPT to compare the runtimes using emojis to represent their relative speeds:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy4pvtcp8sh8z7m3m6x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy4pvtcp8sh8z7m3m6x7.png" alt="ChatGPT 4 results for comparing the runtimes using emojis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, Rust is 🚀 (Rocket)&lt;br&gt;
Go: 🏎️ (Race Car)&lt;br&gt;
Python 3.9: 🚗 (Car)&lt;br&gt;
Python 3.8: 🛵 (Scooter)&lt;br&gt;
Python 3.7: 🐢 (Turtle)&lt;br&gt;
Node.js 12.x: 🚂 (Train)&lt;br&gt;
Node.js 14.x: 🚆 (Light Rail)&lt;br&gt;
Node.js 16.x: 🛴 (Kick Scooter)&lt;br&gt;
Node.js 18.x: 🚲 (Bicycle)&lt;/p&gt;

&lt;p&gt;It's a fun and creative way to summarize this light analysis and share it on Twitter.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1642202287631155204-901" src="https://platform.twitter.com/embed/Tweet.html?id=1642202287631155204"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;h3&gt;Hitting the token limit&lt;/h3&gt;

&lt;p&gt;It's all fun and games, until you start noticing an issue.&lt;/p&gt;

&lt;p&gt;Since I was casually doing data analysis, hitting the ChatGPT 4 token limit (32K) was expected.&lt;/p&gt;

&lt;p&gt;What happens when you hit the token limit? Well, GPT does what it does best: it fills the gaps by predicting the next most likely value. Even for light data analysis this is a problem, because ChatGPT will still respond to a prompt about sorting the table by some other criterion, but the data it uses won't be right. It wasn't merely skewing the data; it was straight-up wrong.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxge1adkkscqqs8ys3pa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxge1adkkscqqs8ys3pa.png" alt="Prompt result came back with false statement"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, if I wanted to keep doing "data analysis", I'd need to reset the context, because ChatGPT counts the chat history against that same token budget. The fix? Just paste the JSON file once again and start over.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 Make a markdown table and sort the runtimes by Avg. Cold Start Duration (in seconds). Add averageMemoryUse. Use displayName as Runtime for the Table Header. Don't include [runtime: value] in the header.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx7603bjmlzk1ndhvki9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx7603bjmlzk1ndhvki9.png" alt="Prompt result Make a markdown table and sort the runtimes by Avg. Cold Start Duration (in seconds). Add averageMemoryUse. Use displayName as Runtime for the Table Header. Don't include [runtime: value] in the header."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Takeaways&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://maxday.github.io/lambda-perf/" rel="noopener noreferrer"&gt;Lambda Cold Starts analysis&lt;/a&gt; by Maxime David provides really valuable insights into the performance of various AWS Lambda runtimes, making it easier for developers to choose the right runtime for their projects.&lt;/p&gt;

&lt;p&gt;ChatGPT proved to be a useful tool for processing the JSON payload and presenting the data in an accessible and engaging format, and in the process I learned a bunch of things I wasn't aware of.&lt;/p&gt;

&lt;p&gt;Again, it's extremely important for me to clarify that this is not advice for choosing the next runtime for your AWS Lambda functions and serverless workflows. You should weigh the relative performance and trade-offs between languages, along with a gauntlet of other variables.&lt;br&gt;
Regardless, it was a simple and fun exercise to weigh the benefits of faster runtimes like Rust and Go against the availability of libraries in languages like Python and Node.js.&lt;/p&gt;

&lt;p&gt;If you're interested in learning more about Serverless from the perspective of a product designer &amp;amp; frontend developer, follow me on Twitter, &lt;a href="https://twitter.com/pixelbeat" rel="noopener noreferrer"&gt;@pixelbeat&lt;/a&gt;, as I'm working my way into Cloud Development.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1642202586437636097-426" src="https://platform.twitter.com/embed/Tweet.html?id=1642202586437636097"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;Source Data: &lt;a href="https://maxday.github.io/lambda-perf/" rel="noopener noreferrer"&gt;Lambda Cold Starts analysis&lt;/a&gt; by &lt;a href="https://maxday.dev/" rel="noopener noreferrer"&gt;Maxime David&lt;/a&gt;.&lt;br&gt;
Source Code: &lt;a href="https://github.com/maxday/lambda-perf" rel="noopener noreferrer"&gt;Lambda Cold Starts Github Repo&lt;/a&gt; by &lt;a href="https://maxday.dev/" rel="noopener noreferrer"&gt;Maxime David&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>chatgpt</category>
    </item>
  </channel>
</rss>
