<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dimitrios Kechagias</title>
    <description>The latest articles on Forem by Dimitrios Kechagias (@dkechag).</description>
    <link>https://forem.com/dkechag</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1033915%2F1afdfcfe-e692-4fbc-b16b-bac616ca5c15.jpeg</url>
      <title>Forem: Dimitrios Kechagias</title>
      <link>https://forem.com/dkechag</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dkechag"/>
    <language>en</language>
    <item>
      <title>Cloud VM benchmarks 2026: performance / price</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Fri, 27 Feb 2026 14:05:25 +0000</pubDate>
      <link>https://forem.com/dkechag/cloud-vm-benchmarks-2026-performance-price-1i1m</link>
      <guid>https://forem.com/dkechag/cloud-vm-benchmarks-2026-performance-price-1i1m</guid>
      <description>&lt;p&gt;Time for the (not exactly) yearly cloud compute VM comparison. I started testing back in October 2025, but the benchmarking scope was increased, not just due to more VM families tested (44), but also due to testing the instances over more regions to attain a possible range of performance, as in many cases not all instances are created equal. I will not spoil much if I tell you that there is one new CPU that dominates the top-end results more clearly than any previous year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Like &lt;a href="https://dev.to/dkechag/cloud-provider-comparison-2024-vm-performance-price-3h4l"&gt;last time&lt;/a&gt;, this is all about &lt;strong&gt;generic CPU performance&lt;/strong&gt; and especially what you can actually &lt;strong&gt;get per $ spent&lt;/strong&gt; on compute VM instances. Due to the focus on CPU workloads, burstable instances are not included. Single-thread performance is evaluated separately, as there are always workloads that cannot be further parallelized. For multi-thread, each instance type is tested in a 2vCPU configuration which is usually the minimum unit you can order (it corresponds to a single core for SMT-enabled systems, like all Intel and most AMD). The more threads your workload can utilize, the more multiples of that unit you can order.&lt;/p&gt;

&lt;p&gt;The comparison should help you maximize performance or minimize cost, depending on your requirements, either by using the optimal VM types of your current provider, or perhaps by launching on a different provider.&lt;/p&gt;

&lt;p&gt;If you don't need all the details, you can use the TOC below to jump to what's relevant to you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table Of Contents:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's new&lt;/li&gt;
&lt;li&gt;
The contenders (2026 edition)

&lt;ul&gt;
&lt;li&gt;Amazon Web Services (AWS)&lt;/li&gt;
&lt;li&gt;Google Cloud Platform (GCP)&lt;/li&gt;
&lt;li&gt;Microsoft Azure&lt;/li&gt;
&lt;li&gt;Oracle Cloud Infrastructure (OCI)&lt;/li&gt;
&lt;li&gt;Akamai (Linode)&lt;/li&gt;
&lt;li&gt;DigitalOcean&lt;/li&gt;
&lt;li&gt;Hetzner&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Test setup &amp;amp; Benchmarking methodology&lt;/li&gt;

&lt;li&gt;

Results

&lt;ul&gt;
&lt;li&gt;Single-thread Performance&lt;/li&gt;
&lt;li&gt;Multi-thread Performance &amp;amp; Scalability&lt;/li&gt;
&lt;li&gt;Performance / Price (On Demand / Pay As You Go)&lt;/li&gt;
&lt;li&gt;Performance / Price (1-Year reserved)&lt;/li&gt;
&lt;li&gt;Performance / Price (3-Year reserved)&lt;/li&gt;
&lt;li&gt;Performance / Price (Spot / Preemptible VMs)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Conclusions

&lt;ul&gt;
&lt;li&gt;Overview&lt;/li&gt;
&lt;li&gt;General Tips&lt;/li&gt;
&lt;li&gt;Caveats for AMD vs Intel vs ARM&lt;/li&gt;
&lt;li&gt;Recommendations per use-case&lt;/li&gt;
&lt;li&gt;Summary per cloud provider&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;What's new&lt;/h2&gt;

&lt;p&gt;I kept the same &lt;strong&gt;7 providers&lt;/strong&gt; as last year (down from the maximum of 10 providers in &lt;a href="https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp"&gt;the 2023 comparison&lt;/a&gt;), but expanded to &lt;strong&gt;44 VM types&lt;/strong&gt; tested.&lt;/p&gt;

&lt;p&gt;Other changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New CPUs:&lt;/strong&gt; AMD &lt;strong&gt;EPYC Turin&lt;/strong&gt; (whose performance &lt;a href="https://dev.to/dkechag/c4d-vms-on-google-cloud-breaking-records-again-with-epyc-turin-14ic"&gt;I had explored separately&lt;/a&gt;) and Intel &lt;strong&gt;Granite Rapids&lt;/strong&gt; are available on the x86 front, while several new ARM solutions are tested: Google &lt;strong&gt;Axion&lt;/strong&gt; (&lt;a href="https://dev.to/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9"&gt;also explored separately&lt;/a&gt; last year), Azure &lt;strong&gt;Cobalt 100&lt;/strong&gt; and Ampere &lt;strong&gt;AmpereOne M&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More testing:&lt;/strong&gt; Some extra benchmarks were added, along with more testing across regions. In the past I only did that for the smaller providers, but the big three have also shown inconsistency, so the main performance and performance/price numbers will show a range.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The contenders (2026 edition)&lt;/h2&gt;

&lt;p&gt;As mentioned, I will focus on &lt;strong&gt;2x vCPU&lt;/strong&gt; instances, as that's the minimum scalable unit for a meaningful comparison (and generally the minimum size for several VM types), given that most &lt;strong&gt;AMD&lt;/strong&gt; and &lt;strong&gt;Intel&lt;/strong&gt; instances use Hyper-Threading (HT) / Simultaneous Multithreading (SMT). For those systems a vCPU is a hyper-thread, or half a core, so a 2x vCPU instance gives you a full core with 2 threads. This will become clear in the scalability section.&lt;/p&gt;

&lt;p&gt;I am skipping some very old instance types that are obviously uncompetitive. I am still aiming for &lt;strong&gt;2GB of RAM per vCPU&lt;/strong&gt; (variously considered "compute-optimized" or "general-purpose") and a &lt;strong&gt;30GB SSD&lt;/strong&gt; (not high-IOPS) boot disk, so that the price comparison makes sense (exceptions will be noted).&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;pay-as-you-go&lt;/strong&gt;/&lt;strong&gt;on-demand&lt;/strong&gt; prices refer to the lowest-cost region in the US (or Europe). For providers with variable pricing, the cheapest regions are almost always in the US. Unlike last year, I do not include the &lt;strong&gt;100% sustained&lt;/strong&gt; discounts for GCP, as they are not technically &lt;strong&gt;on-demand&lt;/strong&gt;, so I may have been unfair to other providers.&lt;/p&gt;

&lt;p&gt;For providers that offer &lt;strong&gt;1-year&lt;/strong&gt; and &lt;strong&gt;3-year&lt;/strong&gt; committed/reserved discounts, I list the no-downpayment price for those options. The prices were valid for &lt;strong&gt;January 2026&lt;/strong&gt; - please check current prices before making final decisions.&lt;/p&gt;

&lt;p&gt;As a guide, here is an overview of the various generations of AMD, Intel and ARM CPUs from older (top) to newer (bottom), roughly grouped horizontally in per-core performance tiers, based on this and the previous comparison results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihxwrtvu3iezsrccaih2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihxwrtvu3iezsrccaih2.png" alt="Chart of CPU rankings" width="480" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This should immediately give you an idea of roughly what performance tier to expect based on the CPU type alone, with the important note that for SMT-enabled instances you get a single core for every 2x vCPUs.&lt;/p&gt;

&lt;p&gt;A general tip is to avoid old CPU generations: due to their lower efficiency (higher running costs), the cloud providers will actually &lt;strong&gt;charge you more for less performance&lt;/strong&gt;. I do not even include types that were already too old to provide good value last year, to focus on the more relevant products.&lt;/p&gt;

&lt;h3&gt;Amazon Web Services (AWS)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;th&gt;1Y Res. $/Month&lt;/th&gt;
&lt;th&gt;3Y Res. $/Month&lt;/th&gt;
&lt;th&gt;Spot $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;C5a.large (R)&lt;/td&gt;
&lt;td&gt;AMD Rome&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;64.45&lt;/td&gt;
&lt;td&gt;41.09&lt;/td&gt;
&lt;td&gt;29.41&lt;/td&gt;
&lt;td&gt;29.08&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C5.large (S)&lt;/td&gt;
&lt;td&gt;Intel Skylake&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;68.10&lt;/td&gt;
&lt;td&gt;45.47&lt;/td&gt;
&lt;td&gt;31.60&lt;/td&gt;
&lt;td&gt;28.02&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C6a.large (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;63.83&lt;/td&gt;
&lt;td&gt;43.04&lt;/td&gt;
&lt;td&gt;29.45&lt;/td&gt;
&lt;td&gt;28.20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C6i.large (I)&lt;/td&gt;
&lt;td&gt;Intel Ice Lake&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;70.66&lt;/td&gt;
&lt;td&gt;47.55&lt;/td&gt;
&lt;td&gt;32.45&lt;/td&gt;
&lt;td&gt;29.02&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C6g.large (G2)&lt;/td&gt;
&lt;td&gt;AWS Graviton2&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;55.03&lt;/td&gt;
&lt;td&gt;36.64&lt;/td&gt;
&lt;td&gt;26.86&lt;/td&gt;
&lt;td&gt;26.61&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C7a.large (G)&lt;/td&gt;
&lt;td&gt;AMD Genoa&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;84.82&lt;/td&gt;
&lt;td&gt;56.92&lt;/td&gt;
&lt;td&gt;38.69&lt;/td&gt;
&lt;td&gt;32.07&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C7i.large (SR)&lt;/td&gt;
&lt;td&gt;Intel Sapphire Rapids&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;74.07&lt;/td&gt;
&lt;td&gt;49.81&lt;/td&gt;
&lt;td&gt;33.95&lt;/td&gt;
&lt;td&gt;24.62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C7g.large (G3)&lt;/td&gt;
&lt;td&gt;AWS Graviton3&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;58.46&lt;/td&gt;
&lt;td&gt;40.65&lt;/td&gt;
&lt;td&gt;28.97&lt;/td&gt;
&lt;td&gt;29.31&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C8a.large (T)&lt;/td&gt;
&lt;td&gt;AMD Turin&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;88.94&lt;/td&gt;
&lt;td&gt;64.94&lt;/td&gt;
&lt;td&gt;44.19&lt;/td&gt;
&lt;td&gt;31.82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C8i.large (GR)&lt;/td&gt;
&lt;td&gt;Intel Granite Rapids&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;77.65&lt;/td&gt;
&lt;td&gt;51.84&lt;/td&gt;
&lt;td&gt;35.43&lt;/td&gt;
&lt;td&gt;28.74&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C8g.large (G4)&lt;/td&gt;
&lt;td&gt;AWS Graviton4&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;66.22&lt;/td&gt;
&lt;td&gt;44.62&lt;/td&gt;
&lt;td&gt;30.50&lt;/td&gt;
&lt;td&gt;27.93&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;Amazon Web Services&lt;/a&gt; (&lt;strong&gt;AWS&lt;/strong&gt;) pretty much originated the whole "cloud provider" business - even though smaller connected VM providers predated it significantly (e.g. Linode comes to mind) - and still dominates the market. The &lt;strong&gt;AWS&lt;/strong&gt; platform offers extensive services, but, of course, we are only looking at their Elastic Cloud (EC2) VM offerings for this comparison.&lt;/p&gt;

&lt;p&gt;Two new CPUs have been introduced since last year. Intel's &lt;strong&gt;Granite Rapids&lt;/strong&gt; makes an appearance, while the AMD &lt;strong&gt;EPYC Turin&lt;/strong&gt;-powered &lt;strong&gt;C8a&lt;/strong&gt; follows the previous &lt;strong&gt;C7a&lt;/strong&gt; in having SMT disabled (providing a full core per vCPU). I don't want to spoil much, but if you take the fastest CPU by a margin and disable SMT, expect some impressive "per-2vCPU" results...&lt;/p&gt;

&lt;p&gt;With EC2 instances you generally know what you are getting (each instance type corresponds to a specific CPU), although the multitude of ways to pay/reserve/prepay/etc. makes pricing very complicated, and it further varies by region (I used the lowest-cost US regions). The 1Y/3Y reserved prices listed include no prepayment - you can lower them a bit further if you do prepay. The spot prices vary even more by region and are updated often (especially for newly introduced types), so you'd want to keep track of them.&lt;/p&gt;
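&lt;p&gt;To put the discount tiers in perspective, here is a minimal sketch that computes the percentage savings of the reserved and spot prices versus on-demand, using the C8a.large and C8g.large monthly prices from the table above:&lt;/p&gt;

```python
# Percentage savings vs. on-demand, using monthly prices from the table above.

def savings(on_demand, discounted):
    """Percent saved when paying the discounted price instead of on-demand."""
    return round(100 * (1 - discounted / on_demand), 1)

prices = {
    # instance: (on-demand, 1Y reserved, 3Y reserved, spot) in $/month
    "C8a.large": (88.94, 64.94, 44.19, 31.82),
    "C8g.large": (66.22, 44.62, 30.50, 27.93),
}

for name, (od, r1, r3, spot) in prices.items():
    print(f"{name}: 1Y {savings(od, r1)}%, 3Y {savings(od, r3)}%, spot {savings(od, spot)}%")
```

&lt;p&gt;Roughly, a 3-year commitment halves the bill, and spot goes further still - if your workload tolerates interruptions.&lt;/p&gt;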

&lt;h3&gt;Google Cloud Platform (GCP)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;th&gt;1Y Res. $/Month&lt;/th&gt;
&lt;th&gt;3Y Res. $/Month&lt;/th&gt;
&lt;th&gt;Spot $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;n2-2* (I)&lt;/td&gt;
&lt;td&gt;Intel Ice Lake&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;63.45&lt;/td&gt;
&lt;td&gt;40.19&lt;/td&gt;
&lt;td&gt;29.65&lt;/td&gt;
&lt;td&gt;22.15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;n2d-2* (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;55.46&lt;/td&gt;
&lt;td&gt;35.22&lt;/td&gt;
&lt;td&gt;26.06&lt;/td&gt;
&lt;td&gt;13.10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c2d-2 (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;68.28&lt;/td&gt;
&lt;td&gt;43.76&lt;/td&gt;
&lt;td&gt;31.82&lt;/td&gt;
&lt;td&gt;15.87&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;t2d-2 (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;8/30&lt;/td&gt;
&lt;td&gt;63.68&lt;/td&gt;
&lt;td&gt;40.86&lt;/td&gt;
&lt;td&gt;29.76&lt;/td&gt;
&lt;td&gt;12.14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c3-4/2** (SR)&lt;/td&gt;
&lt;td&gt;Intel Sapphire Rapids&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;63.69&lt;/td&gt;
&lt;td&gt;40.72&lt;/td&gt;
&lt;td&gt;29.54&lt;/td&gt;
&lt;td&gt;11.09&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c3d-4/2** (G)&lt;/td&gt;
&lt;td&gt;AMD Genoa&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;56.32&lt;/td&gt;
&lt;td&gt;36.08&lt;/td&gt;
&lt;td&gt;26.23&lt;/td&gt;
&lt;td&gt;9.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;n4d-2 (T)&lt;/td&gt;
&lt;td&gt;AMD Turin&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;53.77&lt;/td&gt;
&lt;td&gt;34.46&lt;/td&gt;
&lt;td&gt;25.08&lt;/td&gt;
&lt;td&gt;22.47&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c4a-2 (AX)&lt;/td&gt;
&lt;td&gt;Google Axion (Arm)&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;56.90&lt;/td&gt;
&lt;td&gt;38.09&lt;/td&gt;
&lt;td&gt;26.49&lt;/td&gt;
&lt;td&gt;19.74&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c4d-2 (T)&lt;/td&gt;
&lt;td&gt;AMD Turin&lt;/td&gt;
&lt;td&gt;3/30&lt;/td&gt;
&lt;td&gt;57.57&lt;/td&gt;
&lt;td&gt;36.86&lt;/td&gt;
&lt;td&gt;26.79&lt;/td&gt;
&lt;td&gt;23.40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;n4-2 (E)&lt;/td&gt;
&lt;td&gt;Intel Emerald Rapids&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;57.47&lt;/td&gt;
&lt;td&gt;36.80&lt;/td&gt;
&lt;td&gt;26.74&lt;/td&gt;
&lt;td&gt;19.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c4-2 (E)&lt;/td&gt;
&lt;td&gt;Intel Emerald Rapids&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;63.69&lt;/td&gt;
&lt;td&gt;40.72&lt;/td&gt;
&lt;td&gt;29.54&lt;/td&gt;
&lt;td&gt;27.20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;c4-lssd-4/2** (GR)&lt;/td&gt;
&lt;td&gt;Intel Granite Rapids&lt;/td&gt;
&lt;td&gt;8/30+375GB SSD&lt;/td&gt;
&lt;td&gt;103.75&lt;/td&gt;
&lt;td&gt;65.45&lt;/td&gt;
&lt;td&gt;47.57&lt;/td&gt;
&lt;td&gt;43.70&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;&lt;code&gt;min_cpu_platform&lt;/code&gt; needs to be set to get tested CPU.&lt;/em&gt;&lt;/sup&gt;&lt;br&gt;
&lt;sup&gt;** &lt;em&gt;Extrapolated 2x vCPU instance - type requires 4x vCPU minimum size.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud Platform&lt;/a&gt; (&lt;strong&gt;GCP&lt;/strong&gt;) follows &lt;strong&gt;AWS&lt;/strong&gt; quite closely, providing mostly equivalent services, but lags in market share (3rd place, after &lt;strong&gt;Microsoft Azure&lt;/strong&gt;). We are looking at the Google Compute Engine (GCE) VM offerings, one of the most interesting with respect to configurability and range of different instance types. However, this variety makes it harder to choose the right one for the task, which is exactly what prompted me to start benchmarking all the available types. To add extra confusion, some types may come with an older (slower) CPU if you don't set &lt;code&gt;min_cpu_platform&lt;/code&gt; to the latest available for the type - so you need the extra configuration to get a faster machine for the same price.&lt;/p&gt;

&lt;p&gt;This year we have the addition of the AMD &lt;strong&gt;EPYC Turin&lt;/strong&gt; types (&lt;strong&gt;c4d&lt;/strong&gt; and &lt;strong&gt;n4d&lt;/strong&gt;); they are not yet in all regions/zones, but availability is expanding. We also saw the introduction of two Intel-based 4th gen instance types (&lt;strong&gt;n4&lt;/strong&gt; and &lt;strong&gt;c4&lt;/strong&gt;). Both feature &lt;strong&gt;Emerald Rapids&lt;/strong&gt;, however the latter can be configured with a local SSD, in which case it comes with the newer Intel &lt;strong&gt;Granite Rapids&lt;/strong&gt;. Until GCP allows setting &lt;code&gt;min_cpu_platform&lt;/code&gt; to &lt;strong&gt;Granite Rapids&lt;/strong&gt; (they are considering it AFAIK), you have to pay for the extra SSD to get the performance. Last year &lt;a href="https://dev.to/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9"&gt;I covered separately&lt;/a&gt; the introduction of the Google &lt;strong&gt;Axion&lt;/strong&gt;-powered &lt;strong&gt;c4a&lt;/strong&gt; ARM type, but it appears in a full VM comparison for the first time.&lt;/p&gt;

&lt;p&gt;At this point, I should mention that the reason I did more extensive testing across different regions this year is the disappointing performance of &lt;strong&gt;Emerald Rapids&lt;/strong&gt; in practice, compared to its showing in my original benchmarks. It seems that as it started to get used, it exhibited a performance variance consistent with boost behavior plus node contention (i.e. more sensitivity to noisy neighbors). I suspect this is why GCP offers the option to turn the boost clock off on &lt;strong&gt;Emerald Rapids&lt;/strong&gt; instances for "&lt;em&gt;consistent performance&lt;/em&gt;".&lt;/p&gt;

&lt;p&gt;GCP prices vary per region and feature some strange patterns. For example, when you reserve, &lt;strong&gt;t2d&lt;/strong&gt; instances (which give you a full AMD &lt;strong&gt;EPYC&lt;/strong&gt; core per vCPU) and &lt;strong&gt;n2d&lt;/strong&gt; instances (which give you a single SMT thread, i.e. HALF a core, per vCPU) have the same price per vCPU, yet &lt;strong&gt;n2d&lt;/strong&gt; is cheaper on demand and gets a 20% discount for sustained monthly use.&lt;/p&gt;
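&lt;p&gt;The practical effect is easiest to see as price per physical core rather than per vCPU. A quick sketch using the on-demand monthly prices from the table above:&lt;/p&gt;

```python
# On-demand $/month per physical core for GCP t2d vs n2d (2x vCPU configs).

def price_per_core(monthly_price, vcpus, smt):
    """SMT instances expose 2 vCPUs (threads) per physical core."""
    cores = vcpus / 2 if smt else vcpus
    return round(monthly_price / cores, 2)

t2d = price_per_core(63.68, vcpus=2, smt=False)  # full core per vCPU
n2d = price_per_core(55.46, vcpus=2, smt=True)   # half a core per vCPU
print(t2d, n2d)  # t2d works out much cheaper per actual core
```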

&lt;p&gt;Note that &lt;strong&gt;c3&lt;/strong&gt;, &lt;strong&gt;c3d&lt;/strong&gt; and &lt;strong&gt;c4-lssd&lt;/strong&gt; types have a 4x vCPU minimum. This breaks the price comparison, so I am extrapolating to a 2x vCPU price (half the cost of CPU/RAM + full cost of 30GB SSD). GCP gives you the option to disable cores (you select "visible" cores), so while you have to pay for 4x vCPU minimum, you can still run benchmarks on a 2x vCPU instance for a fair comparison.&lt;/p&gt;
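&lt;p&gt;The extrapolation itself is simple arithmetic; a sketch below, where the component prices are placeholder values for illustration, not actual GCP rates:&lt;/p&gt;

```python
# Extrapolating a 2x vCPU monthly price for a shape with a 4x vCPU minimum:
# half the CPU/RAM cost plus the full cost of the 30GB boot SSD.
# NOTE: cpu_ram_4x and ssd_30gb below are hypothetical example values.

def extrapolate_2vcpu(cpu_ram_4x, ssd_30gb):
    return round(cpu_ram_4x / 2 + ssd_30gb, 2)

print(extrapolate_2vcpu(cpu_ram_4x=100.00, ssd_30gb=3.00))
```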
&lt;h3&gt;Microsoft Azure&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;th&gt;1Y Res. $/Month&lt;/th&gt;
&lt;th&gt;3Y Res. $/Month&lt;/th&gt;
&lt;th&gt;Spot $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;D2as_v5 (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;8/32&lt;/td&gt;
&lt;td&gt;65.18&lt;/td&gt;
&lt;td&gt;39.40&lt;/td&gt;
&lt;td&gt;26.26&lt;/td&gt;
&lt;td&gt;15.16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2ls_v5 (I)&lt;/td&gt;
&lt;td&gt;Intel Ice Lake&lt;/td&gt;
&lt;td&gt;4/32&lt;/td&gt;
&lt;td&gt;64.45&lt;/td&gt;
&lt;td&gt;38.98&lt;/td&gt;
&lt;td&gt;25.98&lt;/td&gt;
&lt;td&gt;17.37&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2pls_v5 (A)&lt;/td&gt;
&lt;td&gt;Ampere Altra&lt;/td&gt;
&lt;td&gt;4/32&lt;/td&gt;
&lt;td&gt;52.04&lt;/td&gt;
&lt;td&gt;31.65&lt;/td&gt;
&lt;td&gt;21.26&lt;/td&gt;
&lt;td&gt;11.57&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2pls_v6 (CO)&lt;/td&gt;
&lt;td&gt;Azure Cobalt 100&lt;/td&gt;
&lt;td&gt;4/32&lt;/td&gt;
&lt;td&gt;47.66&lt;/td&gt;
&lt;td&gt;29.07&lt;/td&gt;
&lt;td&gt;19.59&lt;/td&gt;
&lt;td&gt;11.18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2ls_v6 (E)&lt;/td&gt;
&lt;td&gt;Intel Emerald Rapids&lt;/td&gt;
&lt;td&gt;4/32&lt;/td&gt;
&lt;td&gt;67.59&lt;/td&gt;
&lt;td&gt;42.82&lt;/td&gt;
&lt;td&gt;27.82&lt;/td&gt;
&lt;td&gt;20.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D2als_v6 (G)&lt;/td&gt;
&lt;td&gt;AMD Genoa&lt;/td&gt;
&lt;td&gt;4/32&lt;/td&gt;
&lt;td&gt;61.09&lt;/td&gt;
&lt;td&gt;37.07&lt;/td&gt;
&lt;td&gt;24.71&lt;/td&gt;
&lt;td&gt;13.36&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-gb/" rel="noopener noreferrer"&gt;Azure&lt;/a&gt; is the #2 overall Cloud provider and, as expected, it's the best choice for most Microsoft/Windows-based solutions. That said, it does offer many types of Linux VMs, with quite similar abilities as &lt;strong&gt;AWS&lt;/strong&gt;/&lt;strong&gt;GCP&lt;/strong&gt;. The various types are not easy to use as on &lt;strong&gt;AWS&lt;/strong&gt;/&lt;strong&gt;GCP&lt;/strong&gt; though, for some reason even enterprise accounts start with zero quota on many types, so I had to request quota increases to even test tiny instances.&lt;/p&gt;

&lt;p&gt;The v6 instances are new for the comparison, featuring AMD &lt;strong&gt;EPYC Genoa&lt;/strong&gt;, Intel &lt;strong&gt;Emerald Rapids&lt;/strong&gt; and Azure's own &lt;strong&gt;Cobalt 100&lt;/strong&gt; ARM CPU.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure&lt;/strong&gt; pricing is at least as complex as &lt;strong&gt;AWS&lt;/strong&gt;/&lt;strong&gt;GCP&lt;/strong&gt;, and the pricing tool seems worse. They also lag behind the other two major providers in CPU releases - &lt;strong&gt;Turin&lt;/strong&gt; and &lt;strong&gt;Granite Rapids&lt;/strong&gt; are still in closed preview at the time of writing.&lt;/p&gt;
&lt;h3&gt;Oracle Cloud Infrastructure (OCI)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;th&gt;Spot $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Standard.E6 (T)&lt;/td&gt;
&lt;td&gt;AMD Turin&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;29.00&lt;/td&gt;
&lt;td&gt;15.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard.A1 (A)&lt;/td&gt;
&lt;td&gt;Ampere Altra&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;20.24&lt;/td&gt;
&lt;td&gt;10.75&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard.A2 (AO)&lt;/td&gt;
&lt;td&gt;Ampere AmpereOne&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;17.32&lt;/td&gt;
&lt;td&gt;17.32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard.A4* (AM)&lt;/td&gt;
&lt;td&gt;Ampere AmpereOne M&lt;/td&gt;
&lt;td&gt;4/30&lt;/td&gt;
&lt;td&gt;19.22&lt;/td&gt;
&lt;td&gt;10.24&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;Limited availability currently.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oracle.com/cloud/" rel="noopener noreferrer"&gt;Oracle Cloud Infrastructure&lt;/a&gt; (&lt;strong&gt;OCI&lt;/strong&gt;) was the biggest surprise in my &lt;a href="https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp"&gt;2023 comparison test&lt;/a&gt;. It was a pleasant surprise, not only does &lt;strong&gt;Oracle&lt;/strong&gt; offer by far the most generous free tier (credits for the &lt;strong&gt;A1&lt;/strong&gt; type ARM VM credits equivalent to sustained 4x vCPU, 24GB RAM, 200GB disk for free, forever), their paid ARM instances were the best value across all providers - especially for on-demand. The free resources are enough for quite a few hobby projects - they would cost you well over $100/month in the big-3 providers.&lt;/p&gt;

&lt;p&gt;Note that registration is a bit draconian to avoid abuse: make sure you are not on a VPN, and don't use &lt;em&gt;oracle&lt;/em&gt; anywhere in the email address you register with. You start with a "free" account, which gives you access to a limited selection of services; apart from the free-tier-eligible &lt;strong&gt;A1&lt;/strong&gt; VMs, you'll struggle to build any other types with the free credit you get at the start.&lt;/p&gt;

&lt;p&gt;Upgrading to a regular paid account (which still gives you the free-tier credits), you get a selection of VMs. New this year are the AMD &lt;strong&gt;EPYC Turin Standard.E6&lt;/strong&gt; VMs and the next-generation ARM &lt;strong&gt;Standard.A4&lt;/strong&gt; type powered by the &lt;strong&gt;AmpereOne M&lt;/strong&gt; CPU. If you recall from last year, the &lt;strong&gt;AmpereOne A2&lt;/strong&gt; instances were slower than the older &lt;strong&gt;Altra A1&lt;/strong&gt; in quite a few tasks. Ampere really needed a step forward, and &lt;strong&gt;AmpereOne M (A4)&lt;/strong&gt; finally delivers meaningful gains in this year's dataset. I had trouble building older-gen AMD instances, so in the end I did not include them. I could also only build &lt;strong&gt;Standard.A4&lt;/strong&gt; in one region (Ashburn), even though I also tried Phoenix, which Oracle had in the availability list, to no avail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Oracle&lt;/strong&gt; Cloud's prices are the same across all regions, which is nice. They do not offer any reserved discounts, but they do offer a 50% discount for preemptible (spot) instances. One complication is that their prices are per "Oracle CPU" (OCPU). This seemed to make sense originally, as it corresponded to physical cores: the A1 instances had 1 OCPU per core, so 1 OCPU = 1 vCPU, while SMT x86 had 1 OCPU = 2 vCPUs (threads). But then, possibly thinking that their users were getting comfortable with it, they threw a wrench by making 1 OCPU equal to 2 vCPUs (2 full cores) for the newer, still non-SMT, ARM types A2 and A4. I can't think of a reason for this other than to confuse their customers.&lt;/p&gt;
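&lt;p&gt;Since the OCPU-to-vCPU ratio now differs per shape family, a small helper to normalize prices to per-vCPU terms can save you from comparing apples to oranges (the mapping follows the families described above; the example rates are hypothetical):&lt;/p&gt;

```python
# vCPUs per OCPU on OCI, per shape family as described above:
#  - A1 (Ampere Altra):          1 OCPU = 1 vCPU (1 core)
#  - E6 (x86 with SMT):          1 OCPU = 2 vCPUs (1 core, 2 threads)
#  - A2/A4 (AmpereOne, no SMT):  1 OCPU = 2 vCPUs (2 full cores)

VCPUS_PER_OCPU = {"A1": 1, "E6": 2, "A2": 2, "A4": 2}

def price_per_vcpu(price_per_ocpu, family):
    return round(price_per_ocpu / VCPUS_PER_OCPU[family], 4)

# Hypothetical per-OCPU rate compared across families:
print(price_per_vcpu(0.01, "A1"))  # same rate per vCPU
print(price_per_vcpu(0.01, "A4"))  # half the rate per vCPU
```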
&lt;h3&gt;Akamai (Linode)&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Linode 4GB* (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;4/80&lt;/td&gt;
&lt;td&gt;24.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;G7 4x2 (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;4/80&lt;/td&gt;
&lt;td&gt;43.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;G8 4x2 (T)&lt;/td&gt;
&lt;td&gt;AMD Turin&lt;/td&gt;
&lt;td&gt;4/40&lt;/td&gt;
&lt;td&gt;45.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;Shared core.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.linode.com/" rel="noopener noreferrer"&gt;Linode&lt;/a&gt;, the venerable cloud provider (predating AWS by several years), has now been part of &lt;a href="https://www.akamai.com/solutions/cloud-computing" rel="noopener noreferrer"&gt;Akamai&lt;/a&gt; for a few years.&lt;/p&gt;

&lt;p&gt;From previous years we saw that their shared-core types ("Linodes") are the best bang for the buck, but that depends on which CPU you are assigned on creation. Currently the most common configuration seems to feature an AMD &lt;strong&gt;EPYC Milan&lt;/strong&gt;: I built quite a few and that's what you usually get (if you end up with an ancient Intel or an AMD Rome, try again), and I did not see any newer CPUs pop up. The latest &lt;strong&gt;EPYC Turin&lt;/strong&gt;, though, is available as a dedicated-CPU instance. Dedicated instances are now marked with their generation, so a &lt;strong&gt;G8&lt;/strong&gt; should always be the same CPU. As always, the dedicated instances come with SMT, so you normally get a core per 2 vCPUs, while the shared instances expose virtual cores, so twice the vCPUs gives you twice the multi-thread performance - the caveat being that per-thread performance varies depending on how busy the node that hosts your VM is.&lt;/p&gt;

&lt;p&gt;It is a bit of an annoyance that you can't be sure what performance to expect without testing your VM after creation, unless you go for the more expensive dedicated VMs. Otherwise, Akamai/Linode is still easy to set up and maintain, with fixed, simple pricing across regions.&lt;/p&gt;
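&lt;p&gt;Since what you get is only known after creation, a quick check of the CPU model your shared instance landed on can be done by reading &lt;code&gt;/proc/cpuinfo&lt;/code&gt; (Linux only; the sketch below assumes the usual "model name : ..." line format):&lt;/p&gt;

```python
# Print the CPU model of the current (Linux) VM from /proc/cpuinfo.

def cpu_model(cpuinfo_text):
    """Return the first 'model name' value, or 'unknown' if not found."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("model name"):
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(cpu_model(f.read()))
    except OSError:
        print("No /proc/cpuinfo (not running on Linux?)")
```

&lt;p&gt;If the model name shows an EPYC Milan-class part you are on the common configuration; an older model is your cue to rebuild the instance.&lt;/p&gt;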
&lt;h3&gt;DigitalOcean&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Basic 2/4* (B)&lt;/td&gt;
&lt;td&gt;Intel Broadwell&lt;/td&gt;
&lt;td&gt;4/80&lt;/td&gt;
&lt;td&gt;24.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PremInt 2/4* (C)&lt;/td&gt;
&lt;td&gt;Intel Cascade L&lt;/td&gt;
&lt;td&gt;4/120&lt;/td&gt;
&lt;td&gt;32.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PremAMD 2/4* (R)&lt;/td&gt;
&lt;td&gt;AMD Rome&lt;/td&gt;
&lt;td&gt;4/80&lt;/td&gt;
&lt;td&gt;28.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;Shared core.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://m.do.co/c/e23449764852" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt; was close to the top of the perf/value charts a few years ago, providing the best value with their shared CPU &lt;strong&gt;Basic&lt;/strong&gt; "droplets". I am actually using DigitalOcean droplets to help out by hosting a free weather service called &lt;a href="https://7timer.info/" rel="noopener noreferrer"&gt;7Timer&lt;/a&gt;, so feel free to use my &lt;a href="https://m.do.co/c/e23449764852" rel="noopener noreferrer"&gt;affiliate link&lt;/a&gt; to sign up and get $200 free - you will help with the free project's hosting costs if you end up using the service beyond the free period. Apart from value, I chose them for the simplicity of setup, deployment, snapshots, backups.&lt;/p&gt;

&lt;p&gt;However, they seem to have stopped upgrading their fleet quite a while ago, so you end up with some very old CPUs. If you don't mind the low per-thread performance, they are still not bad value, given the low prices. I like their simple, region-independent and stable pricing structure, but I wish they would upgrade their shared-core data centers.&lt;/p&gt;
&lt;h3&gt;
  
  
  Hetzner
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;RAM/SSD&lt;/th&gt;
&lt;th&gt;Price $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CCX13 (M)&lt;/td&gt;
&lt;td&gt;AMD Milan&lt;/td&gt;
&lt;td&gt;8/80&lt;/td&gt;
&lt;td&gt;17.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CAX11 (A**)*&lt;/td&gt;
&lt;td&gt;Ampere Altra&lt;/td&gt;
&lt;td&gt;4/40&lt;/td&gt;
&lt;td&gt;5.46&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPX22 (G**)*&lt;/td&gt;
&lt;td&gt;AMD Genoa&lt;/td&gt;
&lt;td&gt;4/80&lt;/td&gt;
&lt;td&gt;8.63&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CX23 (R**)*&lt;/td&gt;
&lt;td&gt;AMD Rome&lt;/td&gt;
&lt;td&gt;4/40&lt;/td&gt;
&lt;td&gt;4.31&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CX23 (S**)*&lt;/td&gt;
&lt;td&gt;Intel Skylake&lt;/td&gt;
&lt;td&gt;4/40&lt;/td&gt;
&lt;td&gt;4.31&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;Limited/EU-only availability.&lt;/em&gt;&lt;/sup&gt;&lt;br&gt;
&lt;sup&gt;** &lt;em&gt;Shared core.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hetzner.com/cloud/" rel="noopener noreferrer"&gt;Hetzner&lt;/a&gt; is a quite old German data center operator and web host, with a very budget-friendly public cloud offering. They are often recommended as a reliable extra-low-budget solution, and I've had much better luck with them than other similar providers.&lt;/p&gt;

&lt;p&gt;On the surface, their prices seem to be just a fraction of those of the larger providers, so I did extended benchmark runs over several days to make sure there is no significant oversubscription - which I found only, perhaps, on the cheapest variant (&lt;strong&gt;CX23&lt;/strong&gt;). Only the &lt;strong&gt;CCX13&lt;/strong&gt; claims dedicated cores. Ironically, those dedicated instances vary significantly in performance depending on which data center you create them in. In the end, the &lt;strong&gt;CPX22&lt;/strong&gt; (AMD) and &lt;strong&gt;CAX11&lt;/strong&gt; (ARM) shared-core instances are the most stable in performance across instances and regions.&lt;/p&gt;

&lt;p&gt;Note that the cheap shared-core types are not widely available: they are absent from the US regions, and at times they show no availability even in the European regions. And while I included a &lt;strong&gt;CX23&lt;/strong&gt; with &lt;strong&gt;EPYC Rome&lt;/strong&gt;, you will normally get a slower &lt;strong&gt;Skylake&lt;/strong&gt;. I will not include the shared instances in the price/performance charts this time around, as their limited availability does not make them equal contenders.&lt;/p&gt;
&lt;h2&gt;
  
  
  Test setup &amp;amp; Benchmarking methodology
&lt;/h2&gt;

&lt;p&gt;In order to run many more tests, I streamlined the test suite into a docker image &lt;a href="https://hub.docker.com/r/dkechag/cloud-bench" rel="noopener noreferrer"&gt;which you can run yourself&lt;/a&gt;. Almost all instances ran 64-bit &lt;strong&gt;Debian 13&lt;/strong&gt;, although I had to use &lt;strong&gt;Ubuntu 24.04&lt;/strong&gt; on a couple, and Oracle's ARM instances were only compatible with &lt;strong&gt;Oracle Linux&lt;/strong&gt;. To run the entire suite on a system with docker you would do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --rm dkechag/cloud-bench
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The suite comprises:&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark::DKbench
&lt;/h3&gt;

&lt;p&gt;As every year, the main weight is on &lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;my own benchmark suite&lt;/a&gt;, which you can now also run via its &lt;a href="https://hub.docker.com/r/dkechag/dkbench" rel="noopener noreferrer"&gt;own docker image&lt;/a&gt;. It has proven very good at approximating real-world performance differences in the type of workloads we run &lt;a href="https://www.spareroom.co.uk" rel="noopener noreferrer"&gt;at SpareRoom&lt;/a&gt;, and it is also good at comparing single- and multi-threaded performance (scaling to hundreds of threads if needed). To run DKbench by itself on a system with docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --rm dkechag/dkbench
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created multiple instances in different regions and recorded min and max of all runs (both single-thread and dual-thread).&lt;/p&gt;

&lt;h3&gt;
  
  
  Geekbench 5
&lt;/h3&gt;

&lt;p&gt;I have kept Geekbench, both because it can help you compare results from previous years and because &lt;strong&gt;Geekbench 6&lt;/strong&gt; seems to be much worse - especially in multi-threaded testing (I'd go as far as to say &lt;a href="https://dev.to/dkechag/how-geekbench-6-multicore-is-broken-by-design-44ig"&gt;it looks broken to me&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I simply kept the best of two runs; you can browse the results &lt;a href="https://browser.geekbench.com/user/ecuador" rel="noopener noreferrer"&gt;here&lt;/a&gt;. There's an ARM version too at &lt;a href="https://cdn.geekbench.com/Geekbench-5.4.0-LinuxARMPreview.tar.gz" rel="noopener noreferrer"&gt;https://cdn.geekbench.com/Geekbench-5.4.0-LinuxARMPreview.tar.gz&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phoronix (openbenchmarking.org)
&lt;/h3&gt;

&lt;p&gt;Apart from being popular, the Phoronix benchmarks are useful for testing specific features (e.g. AVX512 extensions), and the results are &lt;a href="//openbenchmarking.org"&gt;openly available&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I ran the following benchmarks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;7-zip&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phoronix-test-suite benchmark compress-7zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A very common application and a very common benchmark - the average compression and decompression scores are recorded.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;nginx 3.0 100 connections&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phoronix-test-suite benchmark nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select option 3.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Openssl (RSA 4096bit)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phoronix-test-suite benchmark openssl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select option 1. This benchmark uses SSE/AVX up to AVX512, which might be important for some people. Older CPUs that lack the latest extensions are at a disadvantage.&lt;/p&gt;

&lt;h3&gt;
  
  
  FFmpeg / libx264
&lt;/h3&gt;

&lt;p&gt;Blender's &lt;a href="https://peach.blender.org/" rel="noopener noreferrer"&gt;Big Buck Bunny&lt;/a&gt; video was transcoded to an H264 mp4 via FFmpeg, both in single and dual-thread mode.&lt;/p&gt;
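&lt;p&gt;The transcode runs would look something like the sketch below - the input filename and flags are my assumptions, as the exact invocation is not listed here. The commands are only printed, so the sketch runs without the video file:&lt;/p&gt;

```shell
# Sketch of the single- and dual-thread transcode runs; filenames and flags
# are illustrative assumptions, not the exact invocation used in the article.
input="big_buck_bunny_720p.mp4"
for threads in 1 2; do
  cmd="ffmpeg -y -i $input -c:v libx264 -threads $threads -an bbb_x264_${threads}t.mp4"
  echo "$cmd"   # printed rather than executed, so no input file is needed
done
```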

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The raw results can be accessed &lt;a href="https://docs.google.com/spreadsheets/d/e/2PACX-1vTy4d10zpOxD-VRaSp5Z9-EoljTOaIs8RBCQFSKsIFNwvG2Q2ogHznMIfghm5OwC_A_Abwi-2g74Fkc/pubhtml" rel="noopener noreferrer"&gt;on this spreadsheet&lt;/a&gt; (or &lt;a href="https://browser.geekbench.com/user/ecuador" rel="noopener noreferrer"&gt;here&lt;/a&gt; for the full Geekbench results).&lt;/p&gt;

&lt;p&gt;In the graphs that follow, the y-axis lists the names of the instances, with the CPU type in parenthesis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(GR) = Intel Granite Rapids
(E)  = Intel Emerald Rapids
(SR) = Intel Sapphire Rapids
(I)  = Intel Ice Lake/Cooper Lake
(C)  = Intel Cascade Lake
(S)  = Intel Skylake
(B)  = Intel Broadwell
(T)  = AMD Turin
(G)  = AMD Genoa
(M)  = AMD Milan
(R)  = AMD Rome
(G4) = Amazon Graviton4
(G3) = Amazon Graviton3
(G2) = Amazon Graviton2
(CO) = Azure Cobalt 100
(AM) = Ampere AmpereOne M
(AO) = Ampere AmpereOne
(A)  = Ampere Altra
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Single-thread Performance
&lt;/h3&gt;

&lt;p&gt;Single-thread performance can be crucial for many workloads. If you have highly parallelizable tasks you can add more vCPUs to your deployment, but there are many common types of tasks where that is not always a solution. For example, a web server can be scaled to service any number of requests in parallel; however, the vCPU's thread speed determines the minimum response time of each request.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench Single-Threaded Performance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We start with the latest DKbench, running the 19 default benchmarks (Perl &amp;amp; C/XS) which cover a variety of common server workloads. I tried to build 2-3 instances at different times across at least 3 regions (if the provider allowed), to get a min/max range of performance. Here are the results for single thread:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tkstqiazoc8zuf4m25g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tkstqiazoc8zuf4m25g.png" alt="Bar Chart comparing single-threaded performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think this is the first time in my series of comparisons that a CPU has had such a clear performance lead. AMD's &lt;strong&gt;EPYC Turin&lt;/strong&gt; is simply a tier above anything else. &lt;strong&gt;AWS&lt;/strong&gt; has the fastest setup with that CPU, while &lt;strong&gt;GCP&lt;/strong&gt;’s more expensive &lt;strong&gt;C4d&lt;/strong&gt; varies a lot in performance, whereas their own, cheaper &lt;strong&gt;N4d&lt;/strong&gt; gave more consistent results. Overall, if you are looking for maximum performance per thread, &lt;strong&gt;EPYC Turin&lt;/strong&gt; is the answer - provided your cloud provider has it.&lt;/p&gt;

&lt;p&gt;In the 2024 comparison Intel &lt;strong&gt;Emerald Rapids&lt;/strong&gt; did quite well, but it turns out that was only on non-busy nodes, where the CPU allows for a generous boost - at least on &lt;strong&gt;GCP&lt;/strong&gt;. This is reflected in the range you see on the graph. The new &lt;strong&gt;Granite Rapids&lt;/strong&gt; seems to fix this, providing slightly higher, but mainly more stable, performance. So, a solid step forward from Intel - it's just that &lt;strong&gt;Turin&lt;/strong&gt; is really impressive.&lt;/p&gt;

&lt;p&gt;As we are waiting for &lt;strong&gt;AWS&lt;/strong&gt; to release &lt;strong&gt;Graviton5&lt;/strong&gt; publicly, &lt;strong&gt;GCP&lt;/strong&gt;’s &lt;strong&gt;Axion&lt;/strong&gt; is the leader for ARM solutions, impressively offering &lt;strong&gt;EPYC Genoa&lt;/strong&gt;-level performance per thread. I tested Azure's own &lt;strong&gt;Cobalt 100&lt;/strong&gt; for the first time - it sits between &lt;strong&gt;Graviton3&lt;/strong&gt; and &lt;strong&gt;Graviton4&lt;/strong&gt; performance. Ampere's new &lt;strong&gt;AmpereOne M&lt;/strong&gt; finally offers some tangible improvement over the aging &lt;strong&gt;Altra&lt;/strong&gt;, but only matches &lt;strong&gt;AWS&lt;/strong&gt;'s older &lt;strong&gt;Graviton3&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Lastly, among the lower-cost providers, &lt;strong&gt;DigitalOcean&lt;/strong&gt; has lagged behind in performance, signaling that their fleet is due for an upgrade. Both &lt;strong&gt;Akamai&lt;/strong&gt; and &lt;strong&gt;Hetzner&lt;/strong&gt; offer some fast &lt;strong&gt;Milan&lt;/strong&gt; instances, although with both providers you are not guaranteed what performance level you will get when creating an instance - hence the variation shown in the chart. It's not oversubscription - the performance is stable - it's just that groups of servers are set up differently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-thread Performance &amp;amp; Scalability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DKbench runs the benchmark suite single-threaded and multi-threaded (2 threads in this comparison as we use 2x vCPU instances) and calculates a scalability percentage. The benchmark obviously uses highly parallelizable workloads (if that's not what you are running, you'd have to rely more on the single-thread benchmarking). In the following graph 100% scalability means that if you run 2 parallel threads, they will both run at 100% speed compared to how they would run in isolation. For systems where each vCPU is 1 core (e.g. all ARM systems), or for "shared" CPU systems where each vCPU is a thread among a shared pool, you should expect scalability near 100% - what is running on one vCPU should not affect the other when it comes to CPU-only workloads.&lt;/p&gt;

&lt;p&gt;Most Intel/AMD systems, though, give you a single core with 2 threads (Hyper-Threading / HT in Intel lingo, or simultaneous multithreading / SMT if you prefer) as a 2x vCPU unit. These will give you scalability well below 100%. A scalability of 50% would mean you have the equivalent of just 1x vCPU, which would be very disappointing. Hence, the farther above 50% you are, the more performance your 2x vCPUs give you over running on a single vCPU.&lt;/p&gt;
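&lt;p&gt;As a concrete example, the scalability figure works out like this (the scores below are made-up placeholders for illustration, not actual results):&lt;/p&gt;

```shell
# Scalability = multi-thread score / (threads x single-thread score), as a percentage.
# Both scores are illustrative placeholders, not measured benchmark results.
single=1000    # score of the suite run on 1 thread
multi=1700     # score of the suite run on 2 threads (same 2x vCPU instance)
threads=2

scalability=$(awk -v s="$single" -v m="$multi" -v t="$threads" \
    'BEGIN { printf "%.1f", 100 * m / (t * s) }')
echo "${scalability}%"   # prints 85.0% - typical SMT territory; ~100% expected for 2 full cores
```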

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx86k7r4dwsmiib6qsfpg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx86k7r4dwsmiib6qsfpg.png" alt="Bar Chart comparing scalability" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, the ARM and shared CPUs are near 100%, i.e. you are getting twice the multithreaded performance going from 1x to 2x vCPUs. You also get that from three x86 types: &lt;strong&gt;AWS's&lt;/strong&gt; &lt;strong&gt;Genoa C7a&lt;/strong&gt; and &lt;strong&gt;Turin C8a&lt;/strong&gt; alongside &lt;strong&gt;GCP's&lt;/strong&gt; older &lt;strong&gt;Milan t2d&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From the rest we note that, traditionally, AMD does SMT better than Intel, although the latter has improved from the dismal &lt;strong&gt;Ice Lake&lt;/strong&gt; days when it barely managed over 50%.&lt;/p&gt;

&lt;p&gt;Bizarrely, the &lt;strong&gt;Akamai AMD Turin&lt;/strong&gt; gives an unusually high (given SMT) scalability of 71.9%. I have verified the result several times, and I can't figure out what their setup is - at the same time, its single-threaded performance is very low compared to every other &lt;strong&gt;Turin&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench Multi-Threaded Performance&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the single-thread performance and scalability results we can guess how running DKbench multithreaded will turn out, but in any case here it is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc9ban68sechvm7ywre2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foc9ban68sechvm7ywre2.png" alt="Bar Chart comparing multi-threaded performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give the clearly fastest instance two full cores instead of threads and you get the &lt;strong&gt;Turin&lt;/strong&gt;-powered &lt;strong&gt;AWS C8a&lt;/strong&gt; completely dominating the chart. Interestingly, the &lt;strong&gt;Google Axion&lt;/strong&gt; seems at least as good here as the leader from the previous comparison, the &lt;strong&gt;Genoa C7a&lt;/strong&gt; - with &lt;strong&gt;Graviton4&lt;/strong&gt; very close and &lt;strong&gt;Cobalt 100&lt;/strong&gt; trailing not far behind.&lt;/p&gt;

&lt;p&gt;The SMT-enabled &lt;strong&gt;Turin&lt;/strong&gt; instances follow, with the top 10 rounded out by the venerable &lt;strong&gt;Milan&lt;/strong&gt; in a non-SMT &lt;strong&gt;Tau&lt;/strong&gt; instance. Long-time followers of these comparisons may remember it topped the chart in the &lt;a href="https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp"&gt;2023 edition&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At the bottom, as expected, we have the very old Intel &lt;strong&gt;Broadwell/Skylake&lt;/strong&gt;, the not-as-old &lt;strong&gt;Ice Lake&lt;/strong&gt;, and AMD &lt;strong&gt;Rome&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Geekbench 5 Score&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The old Geekbench 5 is provided for comparison reasons (and &lt;a href="https://dev.to/dkechag/how-geekbench-6-multicore-is-broken-by-design-44ig"&gt;I don't trust Geekbench 6&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlospeaqaoismxjeehtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlospeaqaoismxjeehtg.png" alt="Bar Chart comparing Geekbench 5 performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both for single- and multi-core, the results are very close to what we get with DKbench. This is a good thing, as both suites run a range of benchmarks to produce a balanced generic CPU score.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;7zip&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moving on to some popular specific benchmarks - starting with 7zip which is sensitive to memory latency and cache:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6355h7vqjhr1ym1cifv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6355h7vqjhr1ym1cifv2.png" alt="Bar Chart comparing 7zip performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;Turin&lt;/strong&gt; still leads overall, &lt;strong&gt;Axion&lt;/strong&gt; and &lt;strong&gt;Graviton4&lt;/strong&gt; are impressive and actually beat it in the decompression part of the benchmark. In fact, &lt;strong&gt;Cobalt 100&lt;/strong&gt; is the top performer for decompression - the ARM solutions show great performance here in general.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;NGINX web server&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results from the 100 connections benchmark:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx66ymebvas7fhk1ok8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkx66ymebvas7fhk1ok8y.png" alt="Bar Chart comparing NGINX performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another &lt;strong&gt;Turin&lt;/strong&gt; showcase, with the non-SMT &lt;strong&gt;AWS C8a&lt;/strong&gt; in particular almost doubling the score of the runner-up and tripling that of the &lt;strong&gt;C7a&lt;/strong&gt;. &lt;strong&gt;Granite Rapids&lt;/strong&gt; also makes a great showing.&lt;/p&gt;

&lt;p&gt;It's the first time I have run this popular benchmark, and I am a bit puzzled by some of the &lt;strong&gt;Milan&lt;/strong&gt; types coming last.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;FFmpeg H264 Video Compression&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another first for this comparison is video compression using FFmpeg and libx264. Results for both single and dual-thread mode:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9px3ultzdif361lg408.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9px3ultzdif361lg408.png" alt="Bar Chart comparing FFmpeg performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once more, &lt;strong&gt;EPYC Turin&lt;/strong&gt; comes first. Looking at single-thread performance, only &lt;strong&gt;Granite Rapids&lt;/strong&gt; comes somewhat close. When using 2 full cores, &lt;strong&gt;Axion&lt;/strong&gt; can pull ahead of all SMT (i.e. single-core) instances except &lt;strong&gt;Turin&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OpenSSL RSA4096 (AVX512)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lastly, in case you have software that can be accelerated by AVX512, I am including an OpenSSL RSA4096 benchmark. These are Intel's extensions, so they are present on all Intel server CPUs since &lt;strong&gt;Skylake&lt;/strong&gt;, whereas &lt;strong&gt;Genoa&lt;/strong&gt; was the first AMD CPU to implement them. Older AMD CPUs and ARM architectures will be at a disadvantage in this benchmark:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1l4lca3btg9xzgkuhs6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1l4lca3btg9xzgkuhs6p.png" alt="Bar Chart comparing OpenSSL performance" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As in our previous comparison, AMD outperforms Intel at its own game. It's quite a margin for &lt;strong&gt;Turin&lt;/strong&gt;, and even &lt;strong&gt;Genoa&lt;/strong&gt; is ahead of anything Intel. Intel does not seem to be prioritising vector performance, as even the latest &lt;strong&gt;Granite Rapids&lt;/strong&gt; does not bring much improvement over the aging &lt;strong&gt;Ice Lake&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As expected, ARM and older AMD CPUs that don't support AVX512 are slower than Intel &lt;strong&gt;Skylake&lt;/strong&gt; and newer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance / Price (On Demand / Pay As You Go)
&lt;/h3&gt;

&lt;p&gt;One factor that is often even more important than performance itself is the performance-to-price ratio. &lt;/p&gt;

&lt;p&gt;I will start with the "on-demand" price quoted by every provider. While I listed monthly costs in the tables, these prices are actually charged per minute or hour, so there is no need to reserve for a full month.&lt;/p&gt;
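&lt;p&gt;The ratio behind these charts is simply the benchmark score divided by the cost; a quick sketch (the score and price below are placeholders, not measured values):&lt;/p&gt;

```shell
# Perf/price ratio: benchmark score per $/month of the instance.
# Since on-demand billing is per hour (or minute), also derive the hourly rate.
# Score and price are illustrative placeholders, not measured values.
score=1500            # a generic benchmark score
monthly_price=28.00   # on-demand $/month, as listed in the tables

perf_per_dollar=$(awk -v s="$score" -v p="$monthly_price" 'BEGIN { printf "%.1f", s / p }')
hourly_rate=$(awk -v p="$monthly_price" 'BEGIN { printf "%.4f", p / 730 }')  # ~730 hours/month
echo "score per \$/month: $perf_per_dollar (hourly rate: \$$hourly_rate)"
```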

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench single-thread performance/price&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first chart is for single-thread performance/price. I have to separate out Hetzner's shared instances because they are not available in the US and sometimes run out even in Europe (esp. &lt;strong&gt;CX23&lt;/strong&gt;), so I feel they are not direct competition - the &lt;strong&gt;CCX13&lt;/strong&gt;, though, is available and is included.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1l09dc02hj2n4pupg8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1l09dc02hj2n4pupg8x.png" alt="Bar Chart comparing single-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hetzner&lt;/strong&gt; and &lt;strong&gt;Oracle&lt;/strong&gt; top the list like last year. However, thanks to the incredible performance of &lt;strong&gt;Turin&lt;/strong&gt;, &lt;strong&gt;Oracle&lt;/strong&gt; pretty much matches &lt;strong&gt;Hetzner's&lt;/strong&gt; dedicated instance in performance per cost. They are followed by &lt;strong&gt;Linode&lt;/strong&gt; and &lt;strong&gt;GCP's N4d&lt;/strong&gt;. The latter, again thanks to the leading single-thread performance of AMD's latest CPU, even manages to offer better value than &lt;strong&gt;DigitalOcean&lt;/strong&gt;, which is in turn followed by in-house ARM solutions like &lt;strong&gt;Google Axion&lt;/strong&gt; and &lt;strong&gt;Azure Cobalt 100&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS&lt;/strong&gt; is definitely the worst value on demand. Their &lt;strong&gt;Turin&lt;/strong&gt; is the best they can do, while their previous-gen and older CPUs are the worst values on the table. Unlike in the previous comparison, even &lt;strong&gt;Azure&lt;/strong&gt; does better in value.&lt;/p&gt;

&lt;p&gt;At this point we should compare the limited-availability &lt;strong&gt;Hetzner&lt;/strong&gt; VMs against their best-value dedicated instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrx436emvt91uu4goa7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezrx436emvt91uu4goa7.png" alt="Bar Chart comparing Hetzner single-threaded performance/price" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The inexpensive shared-CPU types offer unbeatable value - if you manage to get them. The top one overall (&lt;strong&gt;Rome CX23&lt;/strong&gt;) is actually the hardest to provision, as the &lt;strong&gt;CX23&lt;/strong&gt; type usually gives you a slower &lt;strong&gt;Skylake&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench multi-thread performance/price&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moving on to 2x threads for evaluating multi-threaded performance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhahthezmlwxaafuqx03s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhahthezmlwxaafuqx03s.png" alt="Bar Chart comparing multi-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the non-SMT VMs get a bump here, so &lt;strong&gt;Oracle&lt;/strong&gt;'s ARM instances take the lead with the new &lt;strong&gt;AmpereOne M&lt;/strong&gt;, with &lt;strong&gt;Hetzner&lt;/strong&gt; and the shared-core &lt;strong&gt;Linode&lt;/strong&gt; following closely. The second tier consists of &lt;strong&gt;Google Axion&lt;/strong&gt; and &lt;strong&gt;Azure Cobalt 100&lt;/strong&gt;, as well as &lt;strong&gt;DigitalOcean&lt;/strong&gt; droplets. &lt;strong&gt;AWS's&lt;/strong&gt; non-SMT &lt;strong&gt;Turin&lt;/strong&gt; is not that far behind this time, although their older gen 5/6 x86 instances are again at the very bottom of the chart.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Hetzner&lt;/strong&gt; shared-core instances get the bump as well; they provide superb on-demand value compared to the competition:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmiipujq0u0fa4f52fgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmiipujq0u0fa4f52fgn.png" alt="Bar Chart comparing Hetzner multi-threaded performance/price" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance / Price (1-Year reserved)
&lt;/h3&gt;

&lt;p&gt;The three largest (and most expensive) providers offer significant 1-year reservation discounts. To get the maximum discount you have to lock into a specific VM type, which is why it is extra important to know what you are getting out of each. Also, on &lt;strong&gt;AWS&lt;/strong&gt; you can automatically apply the 1-year prices to most on-demand instances by using third-party services like &lt;a href="https://www.doit.com/flexsave/" rel="noopener noreferrer"&gt;DoIT's Flexsave&lt;/a&gt; (included in their free tier!), so this segment may still be relevant even if you don't want to reserve.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench single-thread performance/price (1Y)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first chart is again for single-thread performance/price. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwccot1ws9h9j9bbj1068.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwccot1ws9h9j9bbj1068.png" alt="Bar Chart comparing 1Y single-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 1-year discount is enough for &lt;strong&gt;GCP&lt;/strong&gt;’s &lt;strong&gt;Turin&lt;/strong&gt; to match &lt;strong&gt;Oracle&lt;/strong&gt; near the top of the value ranking. On &lt;strong&gt;Azure&lt;/strong&gt; you get some good value running &lt;strong&gt;Cobalt 100&lt;/strong&gt; or &lt;strong&gt;Genoa&lt;/strong&gt;. If you are on &lt;strong&gt;AWS&lt;/strong&gt;, your best bet is the latest &lt;strong&gt;C8&lt;/strong&gt; family.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench multi-thread performance/price (1Y)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moving on to evaluating multi-threaded performance using 2x vCPUs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8twn8sdb3olybbdzqn33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8twn8sdb3olybbdzqn33.png" alt="Bar Chart comparing 1Y multi-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OCI&lt;/strong&gt; ARM instances are still at the top, joined by &lt;strong&gt;Azure Cobalt 100&lt;/strong&gt;, with &lt;strong&gt;Axion&lt;/strong&gt; almost keeping up. This is the first chart where &lt;strong&gt;AWS&lt;/strong&gt; can offer similar value: the &lt;strong&gt;C8a&lt;/strong&gt;'s fast &lt;strong&gt;Turin&lt;/strong&gt; offers twice the physical cores, making up for the higher price.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance / Price (3-Year reserved)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench single-thread performance/price (3Y)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, for very long term commitments, &lt;strong&gt;AWS&lt;/strong&gt;, &lt;strong&gt;GCP&lt;/strong&gt; and &lt;strong&gt;Azure&lt;/strong&gt; provide 3-year reserved discounts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r2akqt9nhb4yvlqgvze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r2akqt9nhb4yvlqgvze.png" alt="Bar Chart comparing 3Y single-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GCP&lt;/strong&gt; with its &lt;strong&gt;Turin&lt;/strong&gt; instances finally comes just ahead of &lt;strong&gt;Oracle&lt;/strong&gt; and even &lt;strong&gt;Hetzner's&lt;/strong&gt; dedicated VM. &lt;strong&gt;Azure&lt;/strong&gt; also provides good value with its &lt;strong&gt;Cobalt 100&lt;/strong&gt; and &lt;strong&gt;Turin&lt;/strong&gt; types. It should be noted that even though &lt;strong&gt;AWS&lt;/strong&gt; lags behind the others, at a 3-year commitment it still offers better value than the "classic" value providers &lt;strong&gt;Akamai&lt;/strong&gt; and &lt;strong&gt;DigitalOcean&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-thread performance/price (3Y)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Switching to multi-thread, the number of physical cores per vCPU makes the difference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmi0rw6r2pckeb1g92s2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmi0rw6r2pckeb1g92s2.png" alt="Bar Chart comparing 3Y multi-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I didn’t expect this, but &lt;strong&gt;Azure Cobalt 100&lt;/strong&gt; tops the chart! It is followed by &lt;strong&gt;GCP&lt;/strong&gt; and &lt;strong&gt;OCI&lt;/strong&gt; ARM solutions, but &lt;strong&gt;AWS's&lt;/strong&gt; and &lt;strong&gt;GCP's&lt;/strong&gt; &lt;strong&gt;Turin&lt;/strong&gt; are not far behind.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance / Price (Spot / Preemptible VMs)
&lt;/h3&gt;

&lt;p&gt;The large providers (&lt;strong&gt;AWS&lt;/strong&gt;, &lt;strong&gt;GCP&lt;/strong&gt;, &lt;strong&gt;Azure&lt;/strong&gt;, &lt;strong&gt;OCI&lt;/strong&gt;) offer their spare VM capacity at an - often heavy - discount, with the understanding that these instances can be reclaimed at any time when needed by other customers. This "spot" or "preemptible" VM instance pricing is by far the most cost-effective way to add compute to your cloud. Obviously, it is not applicable to all use cases, but if you have a fault-tolerant workload or can gracefully interrupt your processing and rebuild your server to continue, this might be for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS&lt;/strong&gt; and &lt;strong&gt;OCI&lt;/strong&gt; will give you a 2-minute warning before your instance is terminated. &lt;strong&gt;Azure&lt;/strong&gt; and &lt;strong&gt;GCP&lt;/strong&gt; will give you 30 seconds, which should still be enough for many use cases (e.g. web servers, batch processing etc).&lt;/p&gt;
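&lt;p&gt;As an illustration, catching the warning on &lt;strong&gt;AWS&lt;/strong&gt; boils down to polling the instance metadata service. Here is a minimal shell sketch (assuming a spot instance with IMDSv2 enabled; &lt;code&gt;drain_and_shutdown&lt;/code&gt; is a hypothetical placeholder for your own cleanup logic):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/sh
# Poll the EC2 metadata service for a spot interruption notice.
# The spot/instance-action endpoint returns 404 until termination is scheduled.
while true; do
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
  CODE=$(curl -s -o /dev/null -w '%{http_code}' \
    -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/latest/meta-data/spot/instance-action")
  if [ "$CODE" = "200" ]; then
    drain_and_shutdown   # hypothetical: checkpoint work and exit gracefully
    break
  fi
  sleep 5   # the notice arrives about 2 minutes before termination
done
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;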

&lt;p&gt;The discount for &lt;strong&gt;Oracle's&lt;/strong&gt; instances is fixed at 50%, but for the other providers it varies widely per region and can change often, so you have to stay on top of it and adjust your instance types accordingly.&lt;/p&gt;

&lt;p&gt;For a longer discussion on spot instances see &lt;a href="https://dev.to/dkechag/cloud-provider-price-performance-comparison-spot-vms-for-max-value-jpm"&gt;2023's spot performance/price comparison&lt;/a&gt;. Then you can come back to this year's results below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DKbench single-thread performance/price (Spot)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Applying the lowest January 2026 US spot prices we get:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F628lc9zvmxwgv6w4opcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F628lc9zvmxwgv6w4opcx.png" alt="Bar Chart comparing Spot single-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Oracle's Turin&lt;/strong&gt; will always be top value here, as it has a fixed spot discount. Of the big 3, &lt;strong&gt;GCP&lt;/strong&gt; and &lt;strong&gt;Azure&lt;/strong&gt; offer the deepest discounts (on their &lt;strong&gt;Genoa&lt;/strong&gt; and &lt;strong&gt;Cobalt 100&lt;/strong&gt; types respectively), with the former taking top place. Compared to the 3-year reservation chart, you are getting about twice the performance per dollar. &lt;strong&gt;AWS&lt;/strong&gt; is much less generous; if you are on their cloud, &lt;strong&gt;Turin&lt;/strong&gt; is once more your best bet. Even so, &lt;strong&gt;AWS&lt;/strong&gt; spot instances still deliver better value than the traditional low-cost providers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-thread performance/price (Spot)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The multi-thread chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde4xnw2wgwnkx8fln8ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde4xnw2wgwnkx8fln8ys.png" alt="Bar Chart comparing Spot multi-threaded performance/price" width="800" height="1362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure's Cobalt 100&lt;/strong&gt; tops the chart with &lt;strong&gt;OCI's AmpereOne M&lt;/strong&gt; following. Interestingly, in third place and best for the &lt;strong&gt;GCP&lt;/strong&gt; cloud, is the aging &lt;strong&gt;t2d Milan&lt;/strong&gt; which was noted as a great value in &lt;a href="https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp"&gt;previous years&lt;/a&gt;. &lt;strong&gt;AWS&lt;/strong&gt; once more has &lt;strong&gt;Turin&lt;/strong&gt; saving the day by just making it into the top 10. You can get great value with all providers that offer spot instances, but you do have to monitor prices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;As always, I &lt;a href="https://docs.google.com/spreadsheets/d/e/2PACX-1vTy4d10zpOxD-VRaSp5Z9-EoljTOaIs8RBCQFSKsIFNwvG2Q2ogHznMIfghm5OwC_A_Abwi-2g74Fkc/pubhtml" rel="noopener noreferrer"&gt;provide all the data&lt;/a&gt; so you can draw your own conclusions. If you have highly specialized workloads, you may want to rely less on my benchmarks. However, for most users doing general computing, web services, etc. I'd say you are getting a good idea about what to expect from each VM type. In any case, I'll share my own conclusions, some reasonably objective, others perhaps somewhat subjective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;Let’s begin with some quick takeaways, especially for things that are new this year:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AMD Turin&lt;/strong&gt; is an absolute beast. It's so fast that it's usually a good value even when priced higher.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intel Granite Rapids&lt;/strong&gt; is a solid step forward, avoiding the inconsistent performance of its predecessor (&lt;strong&gt;Emerald Rapids&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;On the ARM side, &lt;strong&gt;Google Axion&lt;/strong&gt; shows up as a genuinely high-performance option, while &lt;strong&gt;Azure Cobalt 100&lt;/strong&gt; and &lt;strong&gt;Ampere AmpereOne M&lt;/strong&gt; add more variety and better value tiers than older ARM offerings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hetzner&lt;/strong&gt; offers amazing value, with &lt;strong&gt;Oracle&lt;/strong&gt; following (with a great free tier too).&lt;/li&gt;
&lt;li&gt;From the big 3, &lt;strong&gt;GCP&lt;/strong&gt; and &lt;strong&gt;Azure&lt;/strong&gt; battle it out in price/performance, while &lt;strong&gt;AWS&lt;/strong&gt; is usually more expensive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  General Tips
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Upgrade to the modern CPU types when possible. Older VMs are slower and tend to be more expensive due to higher operational costs.&lt;/li&gt;
&lt;li&gt;Plan your usage and consider making reservations (3-year preferred) to lower costs where applicable. Remember, you can get free 1y reservation prices through 3rd parties (e.g. &lt;a href="https://www.doit.com/flexsave/" rel="noopener noreferrer"&gt;DoIT&lt;/a&gt;) if you are with &lt;strong&gt;AWS&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Leverage spot VMs as much as possible. They are essentially the only way the cloud can compete with the cost of buying and running your own servers. Check prices periodically and across all regions that interest you to find the best deals.&lt;/li&gt;
&lt;li&gt;Remember that &lt;em&gt;vCPUs&lt;/em&gt; are not always comparable: ARM systems and a few x86 systems, like &lt;strong&gt;AWS's C8a/C7a&lt;/strong&gt; and &lt;strong&gt;GCP's t2d&lt;/strong&gt;, provide a full CPU core per vCPU. Most others give you a full core per 2 vCPUs.&lt;/li&gt;
&lt;/ul&gt;
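&lt;p&gt;To verify what you actually got on a provisioned Linux VM, a quick sketch of the vCPU-to-core check with &lt;code&gt;lscpu&lt;/code&gt; (field names can vary slightly between distros):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# "Thread(s) per core: 1" means a full physical core per vCPU;
# "2" means each pair of vCPUs shares one SMT-enabled core.
lscpu | grep -E 'Model name|Thread\(s\) per core|^CPU\(s\)'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;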

&lt;h3&gt;
  
  
  Recommendations per use-case
&lt;/h3&gt;

&lt;p&gt;I'll further comment with my picks for various usage scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Budget solution&lt;/strong&gt;: If &lt;strong&gt;Oracle's&lt;/strong&gt; free tier is not enough for your project, look at &lt;strong&gt;Hetzner&lt;/strong&gt;'s shared-CPU types (preferably &lt;strong&gt;CPX&lt;/strong&gt; or &lt;strong&gt;CAX&lt;/strong&gt;) - especially if you are fine with non-US regions and occasionally limited availability. However, if you can use spot instances, &lt;strong&gt;Azure&lt;/strong&gt; can offer you ARM instances, and &lt;strong&gt;Oracle&lt;/strong&gt; and &lt;strong&gt;GCP&lt;/strong&gt; either ARM or AMD instances, at almost comparable performance/price.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best overall value (performance/price) for non-shared CPU&lt;/strong&gt;: If you can't use spot instances (best value) or reserve for 1+ years, &lt;strong&gt;Oracle&lt;/strong&gt; pretty much tops the performance/price charts with both ARM and x86 (AMD) options. And the charts do not even include the free 4x &lt;strong&gt;A1&lt;/strong&gt; vCPUs that can make a big difference for small projects. If reservations are an option, in the common case of multi-threaded workloads, you can match &lt;strong&gt;OCI&lt;/strong&gt; (or even beat it when committing for 3 years) with &lt;strong&gt;Azure Dpls_v6&lt;/strong&gt;, &lt;strong&gt;GCP c4a/n4d&lt;/strong&gt; or perhaps &lt;strong&gt;AWS C8a&lt;/strong&gt;. If you only care about single-threaded performance, look at &lt;strong&gt;GCP n4d/c4d&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maximum performance&lt;/strong&gt;: It's clear that &lt;strong&gt;Turin&lt;/strong&gt;-powered VMs from all large providers are on a different level to anything else. The best one performance-wise is the non-SMT &lt;strong&gt;AWS C8a&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary per cloud provider
&lt;/h3&gt;

&lt;p&gt;Finally, I'll make some comments per provider. Besides, we can't always pick a provider or switch, so we have to try to work with what's available to us.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS:&lt;/strong&gt; They were the top ARM performers in my previous comparisons, but as &lt;strong&gt;Graviton5&lt;/strong&gt; is still in private beta, &lt;strong&gt;Google Axion&lt;/strong&gt; now leads. However, they offer the fastest compute cloud VMs overall in the &lt;strong&gt;C8a&lt;/strong&gt; with non-SMT &lt;strong&gt;EPYC Turin&lt;/strong&gt;. That VM family's performance makes &lt;strong&gt;AWS's&lt;/strong&gt; traditionally higher prices provide some decent value, especially if you can use spot VMs, or reserve (or use a service like &lt;a href="https://www.doit.com/flexsave/" rel="noopener noreferrer"&gt;Flexsave&lt;/a&gt;). Because &lt;strong&gt;AWS&lt;/strong&gt; does not discount older gens as deeply as others, the &lt;strong&gt;C8a&lt;/strong&gt; is usually the best value even for Spot instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GCP:&lt;/strong&gt; It's pretty clear that you should be on GCP's 4th gen ARM or AMD instances if you care about both performance and value. The &lt;strong&gt;n4d Turin&lt;/strong&gt; is the best bet for most cases - in all my tests so far it performs pretty much the same as &lt;strong&gt;c4d&lt;/strong&gt; at a lower price. Only if your workload can use all cores on your system will the &lt;strong&gt;Axion c4a&lt;/strong&gt; be a bit better still, but that's about it for standard instances. For spot instances things change per region, and usually previous gen instances will give you the best performance per price. It could be &lt;strong&gt;t2d&lt;/strong&gt; or &lt;strong&gt;c3d&lt;/strong&gt; like in the tested region, but there are also regions where &lt;strong&gt;c2d&lt;/strong&gt; or &lt;strong&gt;n2&lt;/strong&gt; (as long as you set &lt;code&gt;min_cpu_platform="Intel Ice Lake"&lt;/code&gt;) come out ahead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure:&lt;/strong&gt; Who'd have expected Azure to develop in-house ARM CPUs? But here we are: &lt;strong&gt;Cobalt 100&lt;/strong&gt; may not be as fast as &lt;strong&gt;Google Axion&lt;/strong&gt;, but it's not far behind, and it's offered at competitive prices, so you'll get similar value to GCP - possibly better for spot instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle:&lt;/strong&gt; I definitely recommend Oracle for small projects where a 4-core ARM VM, which &lt;strong&gt;Oracle&lt;/strong&gt; provides for free, covers most of the requirements. Their non-free instances are also very competitive, with on-demand prices comparable to 1-3 year reservation prices from the "Big 3" (&lt;strong&gt;Oracle&lt;/strong&gt; doesn't do reservation discounts), with the best value being the &lt;strong&gt;AmpereOne M A4&lt;/strong&gt; and &lt;strong&gt;Turin E6&lt;/strong&gt; for ARM and x86 respectively. The &lt;strong&gt;A4&lt;/strong&gt; instances are not yet widely available, and if you have a free account you will face further limitations as to which VMs you can provision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Akamai:&lt;/strong&gt; The &lt;em&gt;Linode&lt;/em&gt; shared-CPU instances still offer good value (second only to Hetzner and Oracle if you only consider on-demand pricing), however you need to build and check each instance to make sure it gets an &lt;strong&gt;EPYC Milan&lt;/strong&gt; (4-digit CPU model number ending in &lt;strong&gt;3&lt;/strong&gt;, check it with &lt;code&gt;cat /proc/cpuinfo&lt;/code&gt; on Linux), otherwise you might be paying the same for less. In most of my test instances I did get a &lt;strong&gt;Milan&lt;/strong&gt;, though. For dedicated instances, at least, you now select a "generation", so you essentially pick the CPU. They are not as good value as the &lt;strong&gt;Linodes&lt;/strong&gt;, but the &lt;strong&gt;G8&lt;/strong&gt; is the best of them in both performance and value, although it's a somewhat bizarre &lt;strong&gt;Turin&lt;/strong&gt; setup: single-thread performance is on a lower tier than any other provider, but SMT gives a surprisingly (and, to me, inexplicably) high boost when multi-threading.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DigitalOcean:&lt;/strong&gt; Although they still provide decent value, they have fallen quite a bit behind &lt;strong&gt;Akamai&lt;/strong&gt;, as there have not been any upgrades. In fact, there is quite a bit of over-subscribing, so performance drops lower than usual. I still use their &lt;strong&gt;Premium AMD&lt;/strong&gt; shared-CPU instances for projects due to the history of reliability I've had with them, but if I want faster servers I have to look elsewhere, which is a shame. They do offer the easiest one-click upgrade of an instance from one type to another (as long as the target has at least as much disk space). As with the other lower cost providers, you do not know exactly what performance level a VM will have until you provision it. If you'd like to give them a try, feel free to use my &lt;a href="https://m.do.co/c/e23449764852" rel="noopener noreferrer"&gt;affiliate link&lt;/a&gt; to sign up and get $200 in credits while supporting the free weather service &lt;a href="https://www.7timer.info" rel="noopener noreferrer"&gt;7Timer!&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hetzner:&lt;/strong&gt; I am very suspicious of extreme "budget" cloud providers, but &lt;strong&gt;Hetzner&lt;/strong&gt; has many longtime satisfied users. Their reputation seems quite solid - most complaints I've read online are about people getting banned, usually with good reason. Their prices seemed too good to be true, so I suspected oversubscribing. Well, they don't seem to be any worse than the likes of &lt;strong&gt;Akamai&lt;/strong&gt; or &lt;strong&gt;DigitalOcean&lt;/strong&gt;: VMs performed at reasonably stable levels for hours to days at a time, perhaps with the exception of the absolute cheapest, the &lt;strong&gt;CX23&lt;/strong&gt;. Interestingly, the dedicated-core &lt;strong&gt;CCX13&lt;/strong&gt; showed more performance fluctuation than the shared-core &lt;strong&gt;CPX22&lt;/strong&gt;, at least as long as the latter comes with the fast &lt;strong&gt;EPYC Genoa&lt;/strong&gt; I was seeing when testing. Unfortunately, the &lt;strong&gt;CPX22&lt;/strong&gt; is available only in eu-central and ap-southeast, but if that works for you it is the best value and fastest overall.&lt;/li&gt;
&lt;/ul&gt;
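&lt;p&gt;For the &lt;strong&gt;GCP n2&lt;/strong&gt; tip above, pinning the CPU generation is a single flag at instance creation time. A sketch (instance name, zone and machine type are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Request at least Ice Lake for an n2 spot VM, so you don't land on Cascade Lake.
gcloud compute instances create my-n2-spot \
  --zone=us-central1-a \
  --machine-type=n2-standard-2 \
  --provisioning-model=SPOT \
  --min-cpu-platform="Intel Ice Lake"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;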

&lt;p&gt;Finally, remember that choosing a cloud provider involves considering network costs, fluctuating prices, regional requirements, RAM, storage, and other factors that vary between providers. This comparison will only assist with part of your decision.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>googlecloud</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>NOAA::Aurora for Space Weather Forecasts</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Thu, 25 Dec 2025 01:09:01 +0000</pubDate>
      <link>https://forem.com/dkechag/noaaaurora-for-space-weather-forecasts-793</link>
      <guid>https://forem.com/dkechag/noaaaurora-for-space-weather-forecasts-793</guid>
      <description>&lt;p&gt;With the current solar maximum, I wanted to add aurora forecasting features to my iOS weather app, &lt;a href="https://apps.apple.com/us/app/xasteria-astronomy-weather/id985030722" rel="noopener noreferrer"&gt;Xasteria&lt;/a&gt;. Instead of fetching text files from NOAA, I thought it would be nice for my weather proxy server to handle that. Hence I developed &lt;a href="https://metacpan.org/dist/NOAA-Aurora/view/README.pod" rel="noopener noreferrer"&gt;NOAA::Aurora&lt;/a&gt; and released it to CPAN.&lt;/p&gt;

&lt;p&gt;The module provides a convenient interface to the NOAA Space Weather Prediction Center (SWPC), handling data fetching, parsing, and caching.&lt;/p&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;It has a simple OO interface; you can get the short-term aurora probability either as a map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;NOAA::&lt;/span&gt;&lt;span class="nv"&gt;Aurora&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;my&lt;/span&gt; &lt;span class="nv"&gt;$aurora&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;NOAA::&lt;/span&gt;&lt;span class="nv"&gt;Aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;# Short-term forecast map for the Northern hemisphere&lt;/span&gt;
&lt;span class="nv"&gt;$aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt;get_image&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;hemisphere&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;north&lt;/span&gt;&lt;span class="p"&gt;',&lt;/span&gt; &lt;span class="s"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aurora_n.jpg&lt;/span&gt;&lt;span class="p"&gt;');&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vjucheedo5dcxwlewgi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vjucheedo5dcxwlewgi.jpg" alt="aurora forecast" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or as a probability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Get short-term aurora probability for a given location&lt;/span&gt;
&lt;span class="k"&gt;my&lt;/span&gt; &lt;span class="nv"&gt;$probability&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt;get_probability&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;lat&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;lon&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also get a forecast timeseries for the next 3 or 27 days:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Let's switch to RFC 3339 format for the timeseries&lt;/span&gt;
&lt;span class="k"&gt;my&lt;/span&gt; &lt;span class="nv"&gt;$aurora&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;NOAA::&lt;/span&gt;&lt;span class="nv"&gt;Aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;date_format&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rfc&lt;/span&gt;&lt;span class="p"&gt;');&lt;/span&gt;

&lt;span class="c1"&gt;# Get 3-day kp forecast as a timeseries&lt;/span&gt;
&lt;span class="k"&gt;my&lt;/span&gt; &lt;span class="nv"&gt;$forecast&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt;get_forecast&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;# [&lt;/span&gt;
&lt;span class="c1"&gt;#   {&lt;/span&gt;
&lt;span class="c1"&gt;#     'kp'   =&amp;gt; '3.67',&lt;/span&gt;
&lt;span class="c1"&gt;#     'time' =&amp;gt; '2025-12-25 00:00:00Z'&lt;/span&gt;
&lt;span class="c1"&gt;#   },&lt;/span&gt;
&lt;span class="c1"&gt;#   {&lt;/span&gt;
&lt;span class="c1"&gt;#     'kp'   =&amp;gt; '2.67',&lt;/span&gt;
&lt;span class="c1"&gt;#     'time' =&amp;gt; '2025-12-25 03:00:00Z'&lt;/span&gt;
&lt;span class="c1"&gt;#   },&lt;/span&gt;
&lt;span class="c1"&gt;#   ...&lt;/span&gt;
&lt;span class="c1"&gt;# ]&lt;/span&gt;

&lt;span class="c1"&gt;# Get 27-day outlook as a timeseries&lt;/span&gt;
&lt;span class="k"&gt;my&lt;/span&gt; &lt;span class="nv"&gt;$outlook&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$aurora&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt;get_outlook&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;# [&lt;/span&gt;
&lt;span class="c1"&gt;#   {&lt;/span&gt;
&lt;span class="c1"&gt;#     'ap'   =&amp;gt; '28',&lt;/span&gt;
&lt;span class="c1"&gt;#     'flux' =&amp;gt; '130',&lt;/span&gt;
&lt;span class="c1"&gt;#     'kp'   =&amp;gt; '5',&lt;/span&gt;
&lt;span class="c1"&gt;#     'time' =&amp;gt; '2025-12-22 00:00:00Z'&lt;/span&gt;
&lt;span class="c1"&gt;#   },&lt;/span&gt;
&lt;span class="c1"&gt;#   {&lt;/span&gt;
&lt;span class="c1"&gt;#     'ap'   =&amp;gt; '22',&lt;/span&gt;
&lt;span class="c1"&gt;#     'flux' =&amp;gt; '135',&lt;/span&gt;
&lt;span class="c1"&gt;#     'kp'   =&amp;gt; '5',&lt;/span&gt;
&lt;span class="c1"&gt;#     'time' =&amp;gt; '2025-12-23 00:00:00Z'&lt;/span&gt;
&lt;span class="c1"&gt;#   },&lt;/span&gt;
&lt;span class="c1"&gt;#   ...&lt;/span&gt;
&lt;span class="c1"&gt;# ]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>perl</category>
      <category>weather</category>
      <category>cpan</category>
    </item>
    <item>
      <title>Google Cloud SQL: x86 N2 vs ARM C4A</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Sat, 08 Nov 2025 00:24:37 +0000</pubDate>
      <link>https://forem.com/dkechag/google-cloud-sql-x86-n2-vs-arm-c4a-4cga</link>
      <guid>https://forem.com/dkechag/google-cloud-sql-x86-n2-vs-arm-c4a-4cga</guid>
<description>&lt;p&gt;The &lt;a href="https://dev.to/dkechag/is-cloud-sql-enterprise-plus-worth-it-evaluating-the-mysql-upgrade-at-spareroom-2m1p"&gt;recent upgrade&lt;/a&gt; of our production Cloud SQL databases to Enterprise Plus required an &lt;em&gt;in-place&lt;/em&gt; process to minimize downtime for our 2TB instance. This meant that we couldn't explore the new &lt;strong&gt;C4A&lt;/strong&gt; ARM option offered under Enterprise Plus, as it currently does not support in-place upgrades (I am told Google is working on it though).&lt;/p&gt;

&lt;p&gt;Given how &lt;a href="https://dev.to/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9"&gt;impressed I was when testing&lt;/a&gt; Google Axion-powered &lt;strong&gt;C4A&lt;/strong&gt; GCP VMs, I thought I should explore the &lt;strong&gt;C4A&lt;/strong&gt; Cloud SQL performance for initial use in our test environment, with an eye toward potential future production use.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise Plus: C4A vs N2&lt;/li&gt;
&lt;li&gt;Import Performance&lt;/li&gt;
&lt;li&gt;Latency&lt;/li&gt;
&lt;li&gt;
SELECTs

&lt;ul&gt;
&lt;li&gt;Very fast (&amp;lt;10ms)&lt;/li&gt;
&lt;li&gt;Fast (&amp;lt;200ms)&lt;/li&gt;
&lt;li&gt;Slow (&amp;gt;200ms)&lt;/li&gt;
&lt;li&gt;Very Slow (&amp;gt;20s)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;UPDATEs&lt;/li&gt;

&lt;li&gt;INSERTs&lt;/li&gt;

&lt;li&gt;DELETEs&lt;/li&gt;

&lt;li&gt;Addendum: Enterprise N4&lt;/li&gt;

&lt;li&gt;Conclusions&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Enterprise Plus: C4A vs N2
&lt;/h2&gt;

&lt;p&gt;In my testing, the 4th gen &lt;strong&gt;C4A&lt;/strong&gt; cores, based on Google's own Axion ARM CPU, prove faster than the Intel Cascade Lake / Ice Lake &lt;strong&gt;N2&lt;/strong&gt; CPUs, and they offer a &lt;em&gt;full physical core per vCPU&lt;/em&gt; instead of just a Hyper-Thread. This should translate into a clear throughput advantage.&lt;/p&gt;

&lt;p&gt;Additionally, &lt;strong&gt;C4A&lt;/strong&gt; as a 4th gen instance has access to the newer "&lt;em&gt;Hyperdisk&lt;/em&gt;" storage with fully configurable IOPS and throughput (instead of having them scaled depending on size). This flexibility comes with a slightly higher cost, as shown in the comparison below:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;N1-14&lt;/th&gt;
&lt;th&gt;N2-8&lt;/th&gt;
&lt;th&gt;C4A-8 low&lt;/th&gt;
&lt;th&gt;C4A-8 mid&lt;/th&gt;
&lt;th&gt;C4A-8 high&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;vCPU #&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Machine Type&lt;/td&gt;
&lt;td&gt;N1 (x86)&lt;/td&gt;
&lt;td&gt;N2 (x86)&lt;/td&gt;
&lt;td&gt;C4A (ARM)&lt;/td&gt;
&lt;td&gt;C4A (ARM)&lt;/td&gt;
&lt;td&gt;C4A (ARM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;600GB (SSD)&lt;/td&gt;
&lt;td&gt;600GB (SSD)&lt;/td&gt;
&lt;td&gt;600GB (Hyperdisk)&lt;/td&gt;
&lt;td&gt;600GB (Hyperdisk)&lt;/td&gt;
&lt;td&gt;600GB (Hyperdisk)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IOPS&lt;/td&gt;
&lt;td&gt;15000&lt;/td&gt;
&lt;td&gt;15000&lt;/td&gt;
&lt;td&gt;4200&lt;/td&gt;
&lt;td&gt;7000&lt;/td&gt;
&lt;td&gt;15000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput (MB/s)&lt;/td&gt;
&lt;td&gt;288&lt;/td&gt;
&lt;td&gt;288&lt;/td&gt;
&lt;td&gt;318&lt;/td&gt;
&lt;td&gt;700&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price/h&lt;/td&gt;
&lt;td&gt;$1.25&lt;/td&gt;
&lt;td&gt;$1.23&lt;/td&gt;
&lt;td&gt;$1.24&lt;/td&gt;
&lt;td&gt;$1.36&lt;/td&gt;
&lt;td&gt;$1.63&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 600GB storage size matches the sample database used. For price parity, I’ve included an &lt;strong&gt;N1&lt;/strong&gt; Enterprise instance with nearly twice the cores. The &lt;strong&gt;C4A&lt;/strong&gt; was tested in three different Hyperdisk configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;low&lt;/strong&gt;: for price-parity, but close to minimum possible Hyperdisk specs, so not recommended.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mid&lt;/strong&gt;: ~10% higher cost, more realistic balance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;high&lt;/strong&gt;: ~30% price premium, used to test scaling with high IOPS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Benchmarks were performed using my &lt;a href="https://metacpan.org/pod/Benchmark::MCE" rel="noopener noreferrer"&gt;Benchmark::MCE&lt;/a&gt; Perl module to run randomized queries with varying thread counts, simulating different load levels.&lt;/p&gt;

&lt;p&gt;A 32x C4D VM in the same region and zone as the &lt;strong&gt;MySQL 8.0&lt;/strong&gt; Cloud SQL instances handled the benchmarking, connecting via Perl DBI/DBD::mysql.&lt;/p&gt;
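&lt;p&gt;A rough sketch of such a harness is below. The DSN, credentials, table and query are placeholders, not the actual benchmark workload; the &lt;code&gt;suite_run&lt;/code&gt; call follows the module's documented interface:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;use strict;
use warnings;
use Benchmark::MCE;
use DBI;

my $dsn = 'DBI:mysql:database=bench;host=10.0.0.5';  # placeholder DSN

for my $threads (1, 2, 4, 8, 16, 32, 64) {
    suite_run({
        threads =&amp;gt; $threads,  # parallel workers to simulate this load level
        scale   =&amp;gt; 10,        # workload multiplier per worker
        bench   =&amp;gt; {
            random_select =&amp;gt; sub {
                # Each worker opens its own handle (DBI handles are not fork-safe).
                my $dbh = DBI-&amp;gt;connect($dsn, 'bench', 'secret', { RaiseError =&amp;gt; 1 });
                my $sth = $dbh-&amp;gt;prepare('SELECT name FROM listings WHERE id = ?');
                $sth-&amp;gt;execute(1 + int rand 1_000_000);  # randomized bind value
            },
        },
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;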

&lt;h2&gt;
  
  
  Import Performance
&lt;/h2&gt;

&lt;p&gt;The first task was importing the ~600GB MySQL database from a GCP bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn60mhtxoljanxsnzp7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn60mhtxoljanxsnzp7q.png" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even the &lt;strong&gt;C4A-mid&lt;/strong&gt; shows an advantage of over 35% vs the &lt;strong&gt;N2&lt;/strong&gt;, and double that vs the &lt;strong&gt;N1&lt;/strong&gt; instance. This is a significant difference, cutting hours from the import of such a large database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Latency
&lt;/h2&gt;

&lt;p&gt;In my previous post, I briefly mentioned the latency differences between &lt;strong&gt;N1&lt;/strong&gt; and &lt;strong&gt;N2&lt;/strong&gt;. Here’s a closer look at "SELECT 1" response times across 1–64 threads:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz6inqgddz49ycqia52w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz6inqgddz49ycqia52w.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Impressively, the &lt;strong&gt;C4A&lt;/strong&gt; delivers an average round-trip time of just 0.1ms, making the &lt;strong&gt;N2&lt;/strong&gt;’s 0.25ms seem sluggish by comparison. The &lt;strong&gt;N1&lt;/strong&gt;, with a much slower 1.5ms latency, processes far fewer requests per second and thus doesn’t experience contention even at 64 threads.&lt;/p&gt;
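&lt;p&gt;Round-trip times like these can be measured with a trivial single-threaded loop; a sketch (DSN and credentials are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;use strict;
use warnings;
use Time::HiRes qw(time);
use DBI;

# Placeholder connection details; point this at your own instance.
my $dbh = DBI-&amp;gt;connect('DBI:mysql:host=10.0.0.5', 'bench', 'secret',
    { RaiseError =&amp;gt; 1 });

my $iters = 1000;
my $start = time;
$dbh-&amp;gt;selectrow_array('SELECT 1') for 1 .. $iters;
printf "Average round-trip: %.3f ms\n", (time - $start) / $iters * 1000;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;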

&lt;h2&gt;
  
  
  SELECTs
&lt;/h2&gt;

&lt;p&gt;Randomized SELECTs based on real workloads were grouped into four categories by execution time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Very fast (&amp;lt;10ms)
&lt;/h3&gt;

&lt;p&gt;For small in-memory queries, let's see the max throughput at various concurrency levels:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xplhpep20vsj41u60a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xplhpep20vsj41u60a.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With twice the hardware cores of the &lt;strong&gt;N2&lt;/strong&gt;, the &lt;strong&gt;C4A&lt;/strong&gt; saturates more slowly and maxes out at over twice the throughput. We can also view the same data as query response times:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywbvly6g4nafdsp6bsn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywbvly6g4nafdsp6bsn1.png" alt=" " width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's clear that under load (8+ threads) the &lt;strong&gt;C4A&lt;/strong&gt; does not slow down nearly as quickly as the &lt;strong&gt;N2&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fast (&amp;lt;200ms)
&lt;/h3&gt;

&lt;p&gt;Similar behaviour with these (also in-memory) queries: the &lt;strong&gt;N2&lt;/strong&gt; saturates here at 4 threads, while the &lt;strong&gt;C4A&lt;/strong&gt; keeps increasing throughput past 8:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdenl3cdnxj3tlw8gwsd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdenl3cdnxj3tlw8gwsd1.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa0mwfvauecgbn49w4b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa0mwfvauecgbn49w4b4.png" alt=" " width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Slow (&amp;gt;200ms)
&lt;/h3&gt;

&lt;p&gt;For more complex queries involving larger joins, the &lt;strong&gt;C4A&lt;/strong&gt;’s advantage narrows. At 64-thread concurrency, performance collapses as the workload becomes I/O bound:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo812h728pkd8mipsgxt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo812h728pkd8mipsgxt3.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu1fs0dxfmkpy8tjj2ex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu1fs0dxfmkpy8tjj2ex.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, the &lt;strong&gt;N2&lt;/strong&gt;’s storage (with higher theoretical IOPS) outperforms the &lt;em&gt;mid&lt;/em&gt;-tier Hyperdisk in these extreme scenarios. This does not imply that maximum Hyperdisk specs are required - if your MySQL workload depends that heavily on disk I/O, you are probably doing something wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Very Slow (&amp;gt;20s)
&lt;/h3&gt;

&lt;p&gt;For very slow report queries, after the database is warmed up, we can see a 25-30% advantage of &lt;strong&gt;C4A&lt;/strong&gt; over &lt;strong&gt;N2&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vyvhuaqsocbe0ygvrge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vyvhuaqsocbe0ygvrge.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It may also be interesting to see a "warmup" run on a cold database, where data will be read from the disk:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7pke95j1zvysci00uue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7pke95j1zvysci00uue.png" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we saw before, in extreme disk access cases such as this cold start, &lt;strong&gt;N2&lt;/strong&gt;’s higher base IOPS outperform the &lt;em&gt;mid&lt;/em&gt;-tier Hyperdisk configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  UPDATEs
&lt;/h2&gt;

&lt;p&gt;In a mix of updates, including ones touching compound indexes (so both CPU and disk access are involved), the &lt;strong&gt;N2&lt;/strong&gt; seemed to keep up better with the &lt;strong&gt;C4A&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4px27dnxeys4uhwsygph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4px27dnxeys4uhwsygph.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyy6tfj486cmw9gdrl0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyy6tfj486cmw9gdrl0l.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;low&lt;/em&gt;-spec Hyperdisk configuration quickly becomes a bottleneck; at least the &lt;em&gt;mid&lt;/em&gt; level is needed for balanced performance. &lt;strong&gt;N1&lt;/strong&gt; trails significantly once again.&lt;/p&gt;

&lt;h2&gt;
  
  
  INSERTs
&lt;/h2&gt;

&lt;p&gt;A mix of complex inserts, with the disk access playing a bigger role:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmfknh9u1o0x12lf2fqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmfknh9u1o0x12lf2fqn.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88ferobqh8x6tp82k3ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88ferobqh8x6tp82k3ks.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's quite interesting to see how throughput increases with load until a storage-related bottleneck is hit. It happens quickly with the &lt;em&gt;low&lt;/em&gt; Hyperdisk, then with the &lt;em&gt;mid&lt;/em&gt; Hyperdisk, while the &lt;em&gt;high&lt;/em&gt; Hyperdisk config keeps scaling further than the &lt;strong&gt;N1&lt;/strong&gt;/&lt;strong&gt;N2&lt;/strong&gt; solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  DELETEs
&lt;/h2&gt;

&lt;p&gt;I tested two types of mass DELETE statements (single-threaded for consistency): one looping with a &lt;code&gt;LIMIT x&lt;/code&gt; clause, the other passing ids as bind variables:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh58np9a93ti9arxvh9su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh58np9a93ti9arxvh9su.png" alt=" " width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a near 20% advantage for &lt;strong&gt;C4A&lt;/strong&gt;, as long as the IOPS of the Hyperdisk are not set too low.&lt;/p&gt;
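&lt;p&gt;The two DELETE strategies can be sketched with DBI as below. Table and column names are illustrative, and &lt;code&gt;$dbh&lt;/code&gt;, &lt;code&gt;$cutoff&lt;/code&gt; and &lt;code&gt;@doomed&lt;/code&gt; are assumed to be set up already:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;# Type 1: delete fixed-size chunks in a loop until no rows are affected
# (DBI's execute returns the affected row count, '0E0' for zero).
my $sth = $dbh-&amp;gt;prepare('DELETE FROM event_log WHERE created &amp;lt; ? LIMIT 25000');
1 while $sth-&amp;gt;execute($cutoff) &amp;gt; 0;

# Type 2: pass explicit ids as bind variables, in batches.
while (my @ids = splice @doomed, 0, 1000) {
    my $in = join ',', ('?') x @ids;
    $dbh-&amp;gt;do("DELETE FROM event_log WHERE id IN ($in)", undef, @ids);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;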

&lt;h2&gt;
  
  
  Addendum: Enterprise N4
&lt;/h2&gt;

&lt;p&gt;I performed the above tests over the summer. In the meantime, Google released Enterprise instances based on their 4th-gen x86 machines (Intel Emerald Rapids, with Hyperdisk storage), so I thought I'd add a quick comparison between &lt;strong&gt;N4&lt;/strong&gt; Enterprise and &lt;strong&gt;N2&lt;/strong&gt; Enterprise Plus for completeness. For about the same price you can get 50% more &lt;strong&gt;N4&lt;/strong&gt; cores than &lt;strong&gt;N2&lt;/strong&gt;, keeping the Hyperdisk IOPS at the &lt;em&gt;mid&lt;/em&gt; levels used in the previous tests. As with the &lt;strong&gt;C4A&lt;/strong&gt;, you cannot currently convert an &lt;strong&gt;N1&lt;/strong&gt; or &lt;strong&gt;N2&lt;/strong&gt; instance to an &lt;strong&gt;N4&lt;/strong&gt; due to the different storage.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;N4-12&lt;/th&gt;
&lt;th&gt;N2-8&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;vCPU #&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Machine Type&lt;/td&gt;
&lt;td&gt;N4 (x86)&lt;/td&gt;
&lt;td&gt;N2 (x86)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;td&gt;64GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;600GB (Hyperdisk)&lt;/td&gt;
&lt;td&gt;600GB (SSD)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IOPS&lt;/td&gt;
&lt;td&gt;7000&lt;/td&gt;
&lt;td&gt;15000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput (MB/s)&lt;/td&gt;
&lt;td&gt;480&lt;/td&gt;
&lt;td&gt;288&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price/h&lt;/td&gt;
&lt;td&gt;$1.27&lt;/td&gt;
&lt;td&gt;$1.23&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here are the results of the most common types of in-memory SELECTs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz05vstflgpuzwdb9khh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz05vstflgpuzwdb9khh.png" alt=" " width="800" height="867"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufb4r7ww12queg2flumm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufb4r7ww12queg2flumm.png" alt=" " width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, the &lt;strong&gt;N4&lt;/strong&gt; cores are a bit faster, and since you get more of them for the price, they deliver significantly higher throughput, making them the better value (forgoing the extra Enterprise Plus benefits, of course).&lt;/p&gt;

&lt;p&gt;I tried some tests with DELETEs and some very deep SELECTs with CTEs, which variously went one way or the other between the two types, but I couldn't achieve consistent enough results in time to include them, so I will need to come back to this, along with an &lt;strong&gt;N4&lt;/strong&gt; to &lt;strong&gt;C4A&lt;/strong&gt; comparison.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;The takeaways as I see them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;maximum in-memory query performance &amp;amp; throughput&lt;/strong&gt;, go with the &lt;strong&gt;C4A&lt;/strong&gt;. It will cost a bit more, as you should pair it with a reasonably specced Hyperdisk.&lt;/li&gt;
&lt;li&gt;In the possibly less common case where &lt;strong&gt;disk I/O&lt;/strong&gt; is the main concern, &lt;strong&gt;N2&lt;/strong&gt; may be your budget solution, with disk I/O on par with or better than a &lt;em&gt;mid&lt;/em&gt;-spec Hyperdisk at a lower price. If I/O performance matters more than budget, however, the Hyperdisk of a &lt;strong&gt;C4A&lt;/strong&gt; can be configured to be much faster at additional cost.&lt;/li&gt;
&lt;li&gt;The addition of the &lt;strong&gt;N4&lt;/strong&gt; option for Enterprise with better performance makes the &lt;strong&gt;N2&lt;/strong&gt; less attractive if you don't mind losing the Enterprise Plus features. The &lt;strong&gt;C4A&lt;/strong&gt; will still have an edge in performance, due to slightly faster cores (and full core/vCPU), but I will have to follow up with some benchmarks on it.&lt;/li&gt;
&lt;li&gt;Overall, I would be choosing between &lt;strong&gt;N4&lt;/strong&gt; &amp;amp; &lt;strong&gt;C4A&lt;/strong&gt; for new deployments. Existing non-Hyperdisk instances are not currently directly upgradeable to them, so it is not as easy to take advantage of them in existing deployments.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>gcp</category>
      <category>mysql</category>
      <category>database</category>
    </item>
    <item>
      <title>Is Cloud SQL Enterprise Plus Worth It? Evaluating the MySQL upgrade at SpareRoom</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Mon, 08 Sep 2025 03:15:23 +0000</pubDate>
      <link>https://forem.com/dkechag/is-cloud-sql-enterprise-plus-worth-it-evaluating-the-mysql-upgrade-at-spareroom-2m1p</link>
      <guid>https://forem.com/dkechag/is-cloud-sql-enterprise-plus-worth-it-evaluating-the-mysql-upgrade-at-spareroom-2m1p</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.spareroom.co.uk" rel="noopener noreferrer"&gt;SpareRoom&lt;/a&gt;, &lt;strong&gt;MySQL&lt;/strong&gt; is a core part of our infrastructure and it's been running on &lt;strong&gt;Google Cloud SQL&lt;/strong&gt; for about five years. Earlier this year we faced a major change: upgrading to MySQL 8. This gave us the option of switching from &lt;strong&gt;Enterprise&lt;/strong&gt; instances (old N1 machine family), to &lt;strong&gt;Enterprise Plus&lt;/strong&gt;, which features the newer N2 machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise Plus&lt;/strong&gt; is more expensive, has very specific configuration options (vCPU in powers of 2, fixed 8GB RAM/vCPU), several new features, but rather vague promises of performance improvements. Without clear benchmarks, it wasn’t obvious whether the extra cost was justified, or what the right size of instance should be.&lt;/p&gt;

&lt;p&gt;This was obviously the time to set up my own benchmarks.&lt;/p&gt;

&lt;p&gt;Quick table of contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
The Benchmark Setup

&lt;ul&gt;
&lt;li&gt;Top-5 Queries&lt;/li&gt;
&lt;li&gt;DELETEs and Schema Changes&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Real World Performance&lt;/li&gt;

&lt;li&gt;Verdict&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Benchmark Setup
&lt;/h2&gt;

&lt;p&gt;As I always do, I focused on workloads that reflect real-world pain points for us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top queries:&lt;/strong&gt; quick (tables fit in memory) but very high-volume SELECTs (searches, views, clicks, etc.) that drive peak load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch deletes:&lt;/strong&gt; archiving and clearing out large log tables, which must fit limited nightly windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema changes:&lt;/strong&gt; migrations that tie up our pipeline for the longest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our baseline DB was a &lt;strong&gt;24 vCPU / 90GB RAM&lt;/strong&gt; Enterprise instance. The upgrade tool defaults to "next up", so &lt;strong&gt;32 vCPU / 256GB Enterprise Plus&lt;/strong&gt; (more than double the cost, so not viable). Instead, we compared our baseline to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Plus 16x vCPU&lt;/strong&gt; (most reasonable candidate, DC on/off versions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Plus 8x vCPU&lt;/strong&gt; (scaling baseline)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise 28x vCPU&lt;/strong&gt; (rough cost match to 16x Plus when keeping baseline 90GB RAM)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise 16x vCPU&lt;/strong&gt; (direct comparison to 16x Plus)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DC&lt;/strong&gt; above is &lt;strong&gt;Data Cache&lt;/strong&gt;, an optional locally attached SSD storage that can improve read performance. All the above in a table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;vCPUs&lt;/th&gt;
&lt;th&gt;Machine&lt;/th&gt;
&lt;th&gt;RAM (GB)&lt;/th&gt;
&lt;th&gt;Storage (GB)&lt;/th&gt;
&lt;th&gt;IOPS&lt;/th&gt;
&lt;th&gt;Price/h&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise x16&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;N1&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;$1.64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise x24&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;N1&lt;/td&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;$1.97&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise x28&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;td&gt;N1&lt;/td&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;$2.40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise Plus x8&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;N2&lt;/td&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;15,000&lt;/td&gt;
&lt;td&gt;$1.36&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise Plus x16&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;N2&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;$2.37&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise Plus x16 DC&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;N2&lt;/td&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;1500 + 750 (DC)&lt;/td&gt;
&lt;td&gt;25,000&lt;/td&gt;
&lt;td&gt;$2.54&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Benchmarks were run from a fast 64 vCPU C4D VM in the same region and zone as the Cloud SQL servers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Top-5 Queries
&lt;/h3&gt;

&lt;p&gt;Using Cloud SQL Query Insights, I pulled our top-5 queries and ran them in a weighted benchmark (queries balanced by duration) with randomized bound variables. The benchmark ran with my &lt;a href="https://metacpan.org/pod/Benchmark::MCE" rel="noopener noreferrer"&gt;Benchmark::MCE&lt;/a&gt; Perl module, which runs it on multiple threads to simulate any level of load. Results with Data Cache on vs off were inconsistent, and since in-memory SELECTs are not supposed to benefit from it anyway, I excluded it from the analysis.&lt;/p&gt;

&lt;p&gt;After warming up the DB, I ran the benchmark on 12 threads to see performance when the DB is at around or below 50% load (i.e. normal operation at peak usage) as well as 32 threads to get an idea of full load behaviour:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioa4mzlfbalr9l68kpcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioa4mzlfbalr9l68kpcb.png" alt="Query Benchmark" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At medium load (12 threads):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Plus&lt;/strong&gt; 16x was &lt;strong&gt;~45% faster&lt;/strong&gt; than &lt;strong&gt;Enterprise&lt;/strong&gt; 24x/28x.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Plus&lt;/strong&gt; 8x was small enough to be close to full load, but still kept up with the big &lt;strong&gt;Enterprise&lt;/strong&gt; instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At high load (32 threads):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With 75% more cores, &lt;strong&gt;Enterprise&lt;/strong&gt; 28x finally caught up at high load. But even in this scenario, the worst case for &lt;strong&gt;Enterprise Plus&lt;/strong&gt;, it showed no disadvantage.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Enterprise Plus&lt;/strong&gt; 8x kept up with &lt;strong&gt;Enterprise&lt;/strong&gt; 16x (while costing ~17% less).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DELETEs and Schema Changes
&lt;/h3&gt;

&lt;p&gt;Deleting millions of rows (25k at a time in a loop), modifying the definition of a column and adding an index were measured next. The number of cores matters very little here (especially for the schema changes); it's the CPU type and storage that matter:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0829elp97rmdesxhbtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu0829elp97rmdesxhbtw.png" alt="Delete / Schema Change benchmark" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DELETEs&lt;/strong&gt; are &lt;strong&gt;25-40% faster&lt;/strong&gt; on &lt;strong&gt;Enterprise Plus&lt;/strong&gt;, depending on whether you have Data Cache enabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema changes&lt;/strong&gt; are at a more modest &lt;strong&gt;5-15%&lt;/strong&gt; difference, with &lt;strong&gt;Enterprise Plus&lt;/strong&gt; always having a small edge.&lt;/li&gt;
&lt;/ul&gt;
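&lt;p&gt;A schema-change timing run of the kind measured above can be sketched as follows. The DDL statements are illustrative, not the actual migrations, and &lt;code&gt;$dbh&lt;/code&gt; is assumed to be a connected DBI handle:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;use Time::HiRes qw(time);

# Time each DDL statement individually (names are hypothetical).
for my $ddl (
    'ALTER TABLE messages MODIFY body MEDIUMTEXT',
    'ALTER TABLE messages ADD INDEX idx_sender_sent (sender_id, sent_at)',
) {
    my $t0 = time;
    $dbh-&amp;gt;do($ddl);
    printf "%-65s %.1fs\n", $ddl, time - $t0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;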

&lt;h2&gt;
  
  
  Real World Performance
&lt;/h2&gt;

&lt;p&gt;Armed with the above results, we switched most of our DBs to (smaller) &lt;strong&gt;Enterprise Plus&lt;/strong&gt; instances. The above &lt;strong&gt;Enterprise&lt;/strong&gt; 24x became an &lt;strong&gt;Enterprise Plus&lt;/strong&gt; 16x. Using Query Insights I saved the performance of the heaviest queries during the last week of Enterprise and compared them with the performance of the same queries during the first week of &lt;strong&gt;Enterprise Plus&lt;/strong&gt; to calculate performance gains per query. The two weeks were reasonably comparable in traffic and the 32 query types plotted below represent about 360 million calls for each respective week:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xn02u8aee8dijf0j0m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xn02u8aee8dijf0j0m0.png" alt="Real World queries" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The average performance gain is about 35%. This is actual execution time, versus the total round-trip time of the synthetic benchmark above, which showed closer to 45%. Some of the gap is likely due to network latency, which also improved slightly on Enterprise Plus.&lt;/p&gt;

&lt;p&gt;I will follow up with a post exploring latency and performance differences further, along with comparing non-x86 options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verdict
&lt;/h2&gt;

&lt;p&gt;Overall, &lt;strong&gt;Enterprise Plus&lt;/strong&gt; cost us &lt;strong&gt;~20%&lt;/strong&gt; more (due to the limited configuration options). However, the performance gains, as well as the extra benefits (better SLA, near-zero downtime maintenance, 35 day Point-in-Time recovery, index advisor and more), made it a clear win.&lt;/p&gt;

&lt;p&gt;If you’re running Cloud SQL on &lt;strong&gt;Enterprise&lt;/strong&gt; and considering the move:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don’t assume you need the same or higher core count that the upgrade tool pushes you towards. A smaller &lt;strong&gt;Enterprise Plus&lt;/strong&gt; instance can outperform a bigger &lt;strong&gt;Enterprise&lt;/strong&gt; one, so consider downsizing after your upgrade.&lt;/li&gt;
&lt;li&gt;Expect a tangible real-world performance boost.&lt;/li&gt;
&lt;li&gt;The reliability/maintenance and other features are a nice bonus on top.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>gcp</category>
      <category>mysql</category>
      <category>database</category>
    </item>
    <item>
      <title>Benchmark::MCE on CPAN</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Wed, 13 Aug 2025 23:00:25 +0000</pubDate>
      <link>https://forem.com/dkechag/benchmarkmce-on-cpan-1ah4</link>
      <guid>https://forem.com/dkechag/benchmarkmce-on-cpan-1ah4</guid>
      <description>&lt;p&gt;I recently refactored the multi-core benchmarking framework I've been using for my Perl CPU benchmark suite (&lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;Benchmark::DKbench&lt;/a&gt;) and released it as  a separate module: &lt;a href="https://metacpan.org/pod/Benchmark::MCE" rel="noopener noreferrer"&gt;Benchmark::MCE&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Why spin it out? Because the harness can do more: it can be used to write custom benchmark suites of any type, generate massively parallel workloads for stress testing, or run throughput benchmarks against services and APIs.&lt;/p&gt;

&lt;p&gt;The exact scenario that prompted me was a comparison of Cloud SQL database instances. We wanted to see how a 16-CPU Enterprise Plus instance would compare to a 24-CPU Enterprise instance under heavy load. One way to do that is to write one or more functions that run randomized, typical/heavy queries (e.g. random searches for &lt;a href="https://www.spareroom.co.uk/" rel="noopener noreferrer"&gt;SpareRoom&lt;/a&gt; ads in our case), then use &lt;a href="https://metacpan.org/pod/Benchmark::MCE" rel="noopener noreferrer"&gt;Benchmark::MCE&lt;/a&gt; to time them running on dozens of parallel &lt;a href="https://metacpan.org/pod/MCE" rel="noopener noreferrer"&gt;MCE&lt;/a&gt; workers to simulate high load:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;Benchmark::&lt;/span&gt;&lt;span class="nv"&gt;MCE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nv"&gt;suite_run&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="s"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# Parallel workers&lt;/span&gt;
  &lt;span class="s"&gt;scale&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Workload multiplier (x times to call code)&lt;/span&gt;
  &lt;span class="s"&gt;bench&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;Name1&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;sub &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="nv"&gt;code1&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="o"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the simplified syntax; since the harness grew out of a benchmark suite I've been developing for a while now, there are many more features (verifying correctness, iteration stats, single/multi-core scaling etc.).&lt;/p&gt;
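&lt;p&gt;To make the simplified syntax above more concrete, here is a minimal sketch of a throughput-style run. The &lt;code&gt;mock_request&lt;/code&gt; workload and all the numbers are invented for illustration; only &lt;code&gt;suite_run&lt;/code&gt; and its &lt;code&gt;threads&lt;/code&gt;/&lt;code&gt;scale&lt;/code&gt;/&lt;code&gt;bench&lt;/code&gt; options come from the module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight perl"&gt;&lt;code&gt;use Benchmark::MCE;
use Time::HiRes qw(sleep);

# Hypothetical workload: simulate a ~10ms service call.
sub mock_request { sleep 0.01 }

suite_run({
  threads =&amp;gt; 8,   # Parallel workers
  scale   =&amp;gt; 50,  # Each worker calls the code 50 times
  bench   =&amp;gt; {
    MockAPI =&amp;gt; sub { mock_request() },
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;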

</description>
      <category>perl</category>
      <category>cpan</category>
    </item>
    <item>
      <title>AI Image Creation: ChatGPT vs Gemini vs DALL·E vs Grok</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Thu, 19 Jun 2025 01:29:59 +0000</pubDate>
      <link>https://forem.com/dkechag/ai-image-creation-chatgpt-vs-gemini-vs-dalle-vs-grok-558e</link>
      <guid>https://forem.com/dkechag/ai-image-creation-chatgpt-vs-gemini-vs-dalle-vs-grok-558e</guid>
      <description>&lt;p&gt;Over a year ago &lt;a href="https://dev.to/dkechag/google-slides-duet-ai-vs-microsoft-bing-image-creator-dalle-3-38d9"&gt;I published a comparison&lt;/a&gt; of Google's Duet AI image generation with Microsoft's DALL·E 3 powered image creator. The focus was image generation for presentations, articles or apps and the results were promising, even though there were spectacular failures in a few subjects. I am revisiting the exact same prompts with the "current crop" of AI generators.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://chatgpt.com/" rel="noopener noreferrer"&gt;OpenAI ChatGPT Plus&lt;/a&gt; (4o):&lt;/strong&gt; While the popular chat originally used DALL·E 3 exclusively, it recently switched to its 4o Image Generation for paid accounts and this is the version I will be testing (on a "Team" account). The free accounts seem to still be limited to DALL·E 3 (in addition to slow generation during peak). One image is generated per prompt. Only Microsoft Image Creator (DALL·E 3) remains unchanged since the last test — all others are new or upgraded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Google Gemini&lt;/a&gt; (Imagen 4):&lt;/strong&gt; It is the successor to the Duet AI I was testing in the previous comparison. It is a significant improvement and the free chat plan does include image generation (with a daily limit). For this comparison I used Gemini Enterprise from within Google Slides, which uses the same engine just generates 4 images per prompt by default. I did the comparison with the Imagen 3 engine and had to redo all the Gemini images as the improved Imagen 4 was released while I was writing the report.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://designer.microsoft.com/image-creator" rel="noopener noreferrer"&gt;Microsoft Image Creator&lt;/a&gt; (DALL·E 3):&lt;/strong&gt; Free for 15 image generations daily (with 4 images per prompt), using DALL·E 3. This is the only engine also used in my previous comparison, performing pretty much the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://grok.com/" rel="noopener noreferrer"&gt;xAI Grok&lt;/a&gt;:&lt;/strong&gt; The free Grok chat allows you 10 images per 2 hours with its Grok 3 image generation, which is quite generous and should be enough for most non-pro use cases. A single image is created per prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Table of Contents for convenience:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
The Comparison Tests

&lt;ul&gt;
&lt;li&gt;The Camels&lt;/li&gt;
&lt;li&gt;Astronomy&lt;/li&gt;
&lt;li&gt;Technical Drawings&lt;/li&gt;
&lt;li&gt;Logo/Splash Graphic Design&lt;/li&gt;
&lt;li&gt;Blog Header Image&lt;/li&gt;
&lt;li&gt;Sloths with Headphones&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Comparison Tests
&lt;/h2&gt;

&lt;p&gt;In my previous comparison I tried to recreate slide deck images I had used in various talks, app icons and header images to dev.to articles. I am using the exact same methodology to see whether there has been progress since last year.&lt;/p&gt;

&lt;p&gt;There are 12 comparison image rounds in total where all engines get the same prompts. Unlike last year, I did not tweak prompts based on results. This time I used the exact same prompts from the 2024 test for a more controlled and direct comparison.&lt;/p&gt;

&lt;p&gt;As before, each round was rated from 0 to 10 based on the proximity of the result to expectations, suitability for the intended purpose and adherence to instructions. The ratings are subjective, but the images are included so you can draw your own conclusions.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Camels
&lt;/h3&gt;

&lt;p&gt;For the last few years I've been giving talks at the &lt;a href="https://tprc.us/" rel="noopener noreferrer"&gt;Perl and Raku conference&lt;/a&gt;. My presentations always feature some sort of camel, as that's the most recognised Perl symbol, so I tried to recreate a couple of camel images I have used in the past.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Running Camel&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a &lt;a href="https://www.youtube.com/watch?v=V3xuYOgELM4&amp;amp;pp=ygUYZGltaXRyaW9zIGtlY2hhZ2lhcyBwZXJs" rel="noopener noreferrer"&gt;Software Performance Optimization&lt;/a&gt; talk, an image of a running camel was used (an inexpensive stock photo):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkklg7dvdcbtpcl5hsqzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkklg7dvdcbtpcl5hsqzn.png" alt="Running Camel" width="512" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The simplest prompt I could think of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;a happy camel running&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvui1lpfdr5p99ndmqa40.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvui1lpfdr5p99ndmqa40.jpg" alt="Happy camel running" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT is simple and quite good - I got a cartoon, but I did not actually specify otherwise. Microsoft's is also not realistic, but in an oddly specific and weird style. Gemini gives one cartoonish and 3 realistic camels running, while Grok's don't seem to be running, and they are too zoomed-in to tell for sure.&lt;/p&gt;

&lt;p&gt;Specifying:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;a photo-realistic happy camel running, single colour background&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv96gw9237ehi9ouc1l6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv96gw9237ehi9ouc1l6.jpg" alt="Happy camel simple bkg" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT and Gemini are pretty much what I had in mind. DALL·E is not photorealistic and Grok did not really improve at all with my second prompt, not even the background was simplified.&lt;/p&gt;

&lt;p&gt;Let's generate some scores:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;6/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Google already had good results last year, but ChatGPT now joins it as a top performer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Camel with Glasses&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A camel with glasses was used for the &lt;a href="https://www.youtube.com/watch?v=vw4y506TsDA&amp;amp;pp=ygUYZGltaXRyaW9zIGtlY2hhZ2lhcyBwZXJs" rel="noopener noreferrer"&gt;Fast Perceptual Image Hashing&lt;/a&gt; talk, created from two stock images:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwsq3x1d2g98m3plur0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwsq3x1d2g98m3plur0x.png" alt="Camel Glasses" width="502" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's a bit rudimentary as I am neither a designer, nor did I want to spend much time on it. Perhaps an AI generator could have managed this with an appropriate prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A smiling camel looking at us through big blue glasses, single colour background, photo-realistic&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61mt8xre0g3wl365k06v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61mt8xre0g3wl365k06v.jpg" alt="Camel with glasses" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point I feel like I was too lax with scoring last year, as I had given Duet AI a 10, while ChatGPT and Google's own Gemini are now clearly better. Grok has the issue of the glasses being a bit off center compared to the eyes, which may be realistic in how glasses would not fit a camel that well, but that's not really what we are after, is it? :)&lt;/p&gt;

&lt;p&gt;In the end I decided to retroactively adjust last year's scores to show the meaningful improvement. I came close to adjusting a couple more scores, but that was a bit of a slippery slope, so I adjusted only the most egregious examples (this and the sloths at the end) by -1.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Astronomy
&lt;/h3&gt;

&lt;p&gt;I used to do talks for my local astronomy club, so I picked some of the slides I used for a &lt;a href="https://drive.google.com/open?id=1O7uUGL0kI3suUPUEx_yLbmfZGcXXYJ4C" rel="noopener noreferrer"&gt;"Choosing Binoculars" talk&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Photos of Astro Objects&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This slide with examples of objects as you'd view them with binoculars was going to be a long-shot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flakxslhcxwwhlc3uq240.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flakxslhcxwwhlc3uq240.png" alt="Binocular objects" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without the "how would they look through binoculars" element, I tried to give a list of objects to see if I could get the sort of "astrophotos on canvas" style above:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A compilation of photos one each of the astronomical objects: the moon, pleiades, orion nebula, andromeda galaxy, the Double Cluster,  comet Lovejoy, arranged randomly on a canvas with slight overlaps&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjblpsaxgufmi6bfcxz7u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjblpsaxgufmi6bfcxz7u.jpg" alt="Astrophotos 1" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT is the only one that gets the right number of images, even though it just does repeats giving 2x Andromeda galaxies and 3x Pleiades. Grok is visually interesting, but not close to what I asked, while Google's and Microsoft's solutions have tons of not overly realistic objects, with Gemini adding some labels full of typos.&lt;/p&gt;

&lt;p&gt;Repeating last year's second attempt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A compilation of 6 photo-realistic photos, one each of the astronomical objects: the moon, pleiades, orion nebula, andromeda galaxy, the Double Cluster, comet Lovejoy, arranged randomly on a bigger canvas, some overlap is allowed&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv87jqi8z51940oorlj9j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv87jqi8z51940oorlj9j.jpg" alt="Astrophotos 2" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gemini does probably worse than last year. Adding the label "Pleades" (sic) to something random does not make it the Pleiades... And it still cannot count. Interestingly, the previous Imagen 3 engine Gemini was using until a few days ago did a bit better. DALL·E can't do anything useful, as we saw last year, and Grok is again visually interesting, with realistic-looking objects, but does not get the "photos on canvas" instructions. ChatGPT actually does well; had it not repeated one of the 6 images, it would have been perfect.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stack of Binoculars&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This photo is from the same presentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6bmgs29x5zfvjo11fcz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6bmgs29x5zfvjo11fcz.jpg" alt="Stacked Binoculars" width="257" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;4 binoculars stacked on top of each other from largest (bottom) to smallest (top), with their lens pointed towards our viewpoint, photo-realistic&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m9go9kc467y99jlyi9s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m9go9kc467y99jlyi9s.jpg" alt="Stacked Binoculars 1" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I think we established last year that DALL·E 3 does not understand binoculars. Add Grok to that category: its results are photorealistic but outlandish, binocular-inspired depictions. Google's Imagen 4 update is a big improvement (just a few days ago I was getting results close to last year's Duet AI), with usable results. ChatGPT's latest solution gets it right on the first try.&lt;/p&gt;

&lt;p&gt;Alternative prompt with more hints:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;4 pairs of binoculars of various types, stacked on top of each other from largest (bottom) to smallest (top) with their lens pointing to viewer, photo-realistic&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfkflvi9yac2dqlnd7qs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfkflvi9yac2dqlnd7qs.jpg" alt="Stacked Binoculars 2" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was very lenient last year when Duet AI managed to produce one usable image on the second try; ChatGPT and Gemini go far beyond that, producing good results from the get-go and improving with more hints. Bing and Grok are alien to the concept of binoculars.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lawn Chair Binocular Mount&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again on the same slide deck, there was an image of one of the various DIY "lawn chair binocular mounts" that can be impressive and sort of amusing:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc8u3j1s65tyx9y5c5ac.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc8u3j1s65tyx9y5c5ac.jpg" alt="Bino Chair" width="700" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;They are sometimes called "bino-chairs" and require some design creativity and ingenuity, so it was a different type of test for the AI engines.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;man on lawn chair using hands-free binocular mount&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa57g8h3g4t3uzeeu9d7x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa57g8h3g4t3uzeeu9d7x.jpg" alt="Bino Chair" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image Creator gets a lenient 5, as it produces a sort of tripod in one of the images (but not completely hands-free). ChatGPT and Gemini gave great images; I will deduct one point from Gemini for the distinction of ChatGPT getting the "hands-free" aspect with no extra direction (Gemini gets there easily if you add it to the prompt). Grok would have been spot on, if the binoculars were not the wrong way around!&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;6/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Technical Drawings
&lt;/h3&gt;

&lt;p&gt;For technical talks, explanatory illustrations are often required. E.g. the following image from the &lt;a href="https://www.youtube.com/watch?v=vw4y506TsDA" rel="noopener noreferrer"&gt;Perceptual Image Hashing&lt;/a&gt; talk shows which cells of a 6x6 matrix are used for a specific hash:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme7ipel8v03huqn18vbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme7ipel8v03huqn18vbq.png" alt="Matrix" width="403" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;draw a symmetric 6x6 square matrix with white lines, make the top-left cell black, also the cells that are below the bottom-left to top-right diagonal also black, and the rest blue, 2d art style&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At this point, Grok started malfunctioning. First it started giving me images that incorporated the previous prompt for no reason:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81jbf1n6ghqjdkn5hiqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81jbf1n6ghqjdkn5hiqe.png" alt="Grok matrix" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even including "forget context" instructions alongside the prompt did not help, but a "Forget previous images. Start from scratch." prompt did make it exclaim that it "understood" and would start afresh, whereupon it gave me a broken image link. When I complained that I couldn't see it, I finally got a result. Not a great result though:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaorjb469zcla5zl6zho.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaorjb469zcla5zl6zho.jpg" alt="Matrix 1" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, Grok, Gemini and Image Creator all try too hard and go wildly off script. ChatGPT almost got it perfect. There's a small error about which diagonal the black squares fall under, but it's close; it even automatically switched to square-format output.&lt;/p&gt;

&lt;p&gt;Going back to the very basics, I replaced even the word "matrix" with "grid" to see if the others could be helped.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;plain 6x6 square grid, solid colour background, vector drawing&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4i5zf2p8drrrp3b1xzh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4i5zf2p8drrrp3b1xzh.jpg" alt="Matrix 2" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT nails it, Gemini gets it in 2 out of 4 examples, while the other two are just wildly off.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Logo/Splash Graphic Design
&lt;/h3&gt;

&lt;p&gt;Moving on to a couple of logos / splash screens I designed for the iOS apps I develop as a hobby. It's not typical slide deck graphics, but it could still be relevant if you are putting together a new project / product and creating a presentation for it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Polar Scope Align Icon&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://apps.apple.com/us/app/polar-scope-align/id970157965" rel="noopener noreferrer"&gt;Polar Scope Align&lt;/a&gt; is an iOS app for amateur astronomers &amp;amp; astrophotographers. It's quite popular in its niche and it is often praised for a well-designed UI with a focus on functionality. The image assets themselves, such as icons, are rather simple as I am not a designer. Here is the older (left) and newer (right) icon of the app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfi4ltzov3z4mzsu1426.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfi4ltzov3z4mzsu1426.png" alt="PS Align" width="512" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hopefully, with the right prompt, something that resembles the older &amp;amp; simpler icon on the left could be within the abilities of the AI engines.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;red crosshairs with circle around them, centered on the middle of the 7 stars of the Little Dipper, the Little Dipper should barely fit the circle, clip-art style&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvas7tbap45kcqk8bipn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvas7tbap45kcqk8bipn.jpg" alt="PS Align 1" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT gets the style very well, except with reversed colours - not sure why it went with black stars on white. It does not get the actual constellation - but no other generator did either, with Gemini being the closest in style. Grok did not do clip-art as instructed, while Image Creator is visually interesting but not close to what I was asking for.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Xasteria Icon&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next up, the icon / splash screen for the app &lt;a href="https://apps.apple.com/us/app/xasteria-astronomy-weather/id985030722" rel="noopener noreferrer"&gt;Xasteria Astronomy Weather&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhr0vumet3kx29nnbpvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhr0vumet3kx29nnbpvk.png" alt="Xasteria" width="384" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;solid dark blue sky, having several yellow 4-pointed stars of various sizes, each designed using hyperbolic curves, but all with their points at top/bottom/left right orientation, a third of the sky covered by a dark grey mountainous range silhouette and a big white X that is the same shape as the 4-pointed stars but rotated 45 degrees and takes up 80% of the width of the scene, clip-art style&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0exktawp3drgcnmlu2j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0exktawp3drgcnmlu2j.jpg" alt="Xasteria 1" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT is the best on the first try. It got the stars and mountains right; only the X is close but not exactly right, going with parabolic instead of hyperbolic curves. Gemini's X is squared off, so a bit worse. Grok changed the aspect ratio a bit for some reason (wider - and kept doing that for about half the subsequent tries), and gave me some weird stars and an asymmetric X. Correct colours though, so I'd call it an improvement over the weird Microsoft attempt.&lt;/p&gt;

&lt;p&gt;To help Dall·E last year, I had asked ChatGPT to optimize the prompt, and got this one to try:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;create an image of a serene landscape with a solid dark blue sky. Populate the sky with several yellow 4-pointed stars of various sizes, each designed using hyperbolic curves. Ensure that all stars have their points at top/bottom/left/right orientations. Dedicate one-third of the sky to a dark grey mountainous range silhouette. Additionally, include a prominent white X shape in the scene. The X should be the same shape as the 4-pointed stars but rotated by 45 degrees, taking up 80% of the width of the scene. The X should maintain the hyperbolic curve design.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froiwek7jydxo6ysg9wwi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Froiwek7jydxo6ysg9wwi.jpg" alt="Xasteria 2" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results are mostly worse, except Gemini which gets the X with the hyperbolic curves perfectly in at least one attempt.&lt;/p&gt;

&lt;p&gt;The last attempt involved going back to the original prompt, but tweaking the description of the "X" to make it more explicit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;solid dark blue sky, having several yellow 4-pointed stars of various sizes, each designed using hyperbolic curves, but all with their points at top/bottom/left right orientation, a third of the sky covered by a dark grey mountainous range silhouette, at the foreground a big white X that is also designed using hyperbolic curves and takes up 80% of the width of the scene, clip-art style&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nc8qpvazp4ndyos6jdi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nc8qpvazp4ndyos6jdi.jpg" alt="Xasteria 3" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most engines actually did worse than with the original prompt here, except Gemini, which is at a similar level, and Dall·E, which is a small improvement.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Blog Header Image
&lt;/h3&gt;

&lt;p&gt;After all these quite specific images, I thought I'd try more creative generation and see what the AI engines can come up with when given titles of articles - dev.to articles I've posted to be exact.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Google Cloud &amp;amp; AMD EPYC&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First is &lt;a href="https://dev.to/dkechag/google-cloud-c3d-review-record-breaking-performance-with-epyc-genoa-g13"&gt;the performance review&lt;/a&gt; of the latest AMD EPYC-powered GCP instances. I used a simplified title last year, as Duet AI was getting confused; I'll repeat the same:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;An image that can serve as a title for a Google Cloud and AMD EPYC presentation&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You'll notice I say "title", when I really should have said "header" or similar. I did not notice, as the image generators did not take it literally in the last comparison - possibly because they were lousy with text. Here comes ChatGPT 4o though:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz1i7ktqrvi33o6pbr4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz1i7ktqrvi33o6pbr4t.png" alt="Google Cloud AMD EPYC - ChatGPT" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Text looks good and, interestingly, the top half is kind of what I went with myself. However, I will change "title" to "header" for this comparison, as I had expected some sort of graphics to be included:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0oqnjayfglitefujvvl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0oqnjayfglitefujvvl.jpg" alt="Google Cloud AMD EPYC" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT gave a very simple design, but it is just right, even getting Google's logo and the EPYC font correct. Gemini tries harder to impress, but modifies logos etc. in the process. It does know that a header image should have a wide aspect ratio. Image Creator is an improvement on last year - no garbage text. Grok is just uninspired - generic. I'll base the points on what I gave Image Creator last year (I was a bit lenient again).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Compute Cloud Provider Comparison&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next was the &lt;a href="https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp"&gt;Compute Cloud Provider performance and price comparison&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;an image that can be used as a header in a compute cloud provider price &amp;amp; performance comparison&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futmqqvnqy9ncv3uiqx19.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futmqqvnqy9ncv3uiqx19.jpg" alt="Cloud Comparison" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT is again a simple design that gets the text right, and the drawing is very much on point. Gemini is usable as long as it does not try to add too much text, at which point we start getting gems like "COMPANES" (sic). Image Creator is similar to last year: no text attempted, so usable results, although a bit too "imaginative". Grok decided to give me a single image for the first time, and it's not great, as there are some weird typos in the title and the chart is even weirder.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;6/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;This Article&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this article a generic prompt was attempted:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Header image for blog post: "AI Image Creation: ChatGPT vs Gemini vs DALL·E 3 vs Grok"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk2liq0wuctdwgq8u8xb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk2liq0wuctdwgq8u8xb.jpg" alt="Vs Blog" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT was reasonably clear, the others rather disappointing, although Grok did get most of the text OK - these two were the only ones that could spell &lt;em&gt;DALL·E&lt;/em&gt;. Gemini and DALL·E could not spell anything.&lt;/p&gt;

&lt;p&gt;Since I didn't get good results, I gave more explicit prompts, such as asking for a painter writing a different phrase for each engine on a canvas:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;painter writing [name of ai service] on a canvas.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf5b2s4fbsg6nabmrwnl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf5b2s4fbsg6nabmrwnl.jpg" alt="Painter Canvas" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ChatGPT does well as usual, with somewhat artistic output. Gemini is great, with realistic images and correct spelling / Google logo. Microsoft's service gets the spelling of "Microsoft" slightly off in 2/4 tries. Grok gives me one version before the painter has written anything - maybe it's intended as "progression"?&lt;/p&gt;

&lt;p&gt;I'll give average marks from the two attempts above (with the separate marks in parentheses next to them).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;9/10&lt;/strong&gt; (8+10)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;6/10&lt;/strong&gt; (1+10)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;4/10&lt;/strong&gt; (2+6)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;5/10&lt;/strong&gt; (3+6)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Sloths with Headphones
&lt;/h3&gt;

&lt;p&gt;Finally, I tried something fun for my videoconferencing background. First thing that came to my mind was:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;sloths wearing headphones, photo-realistic&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha7j2j148c82jemmyj2e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha7j2j148c82jemmyj2e.jpg" alt="Sloths" width="800" height="1021"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gemini and Grok gave me what I wanted: some playful, reasonably realistic sloths (plural) wearing headphones. ChatGPT gave me what looks like a passport photo sheet of a single sloth, not particularly natural or realistic. Image Creator also had trouble with the plural - half the attempts featured a single sloth. I did award a 10/10 last year; this is the second case where I am revisiting to subtract 1, as I was too lenient and this year's top two performers clearly improved upon last year's best:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;6/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Creator&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;7/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Grok&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10/10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Let's take a look at the cumulative scores:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test&lt;/th&gt;
&lt;th&gt;ChatGPT&lt;/th&gt;
&lt;th&gt;Gemini&lt;/th&gt;
&lt;th&gt;Image Creator&lt;/th&gt;
&lt;th&gt;Grok&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Running Camel&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Camel Glasses&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Astrophotos&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binoculars&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bino Chair&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6x6 Matrix&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PS Align Icon&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xasteria Icon&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCP / EPYC&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Comparison&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Drawing Phrase&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sloths&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;109/120&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;84/120&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;52/120&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;53/120&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;91%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;70%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;43%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;44%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This year's follow-up confirms that AI image generation tools have made noticeable progress - from sub-50% scores, we got to at least one solution (subjectively) scoring over 90%.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The top performer is ChatGPT; the new 4o model is the most dependable of all the solutions. It is the only one that can count, do technical drawings and fully solve text rendering, while also being the best at interpreting prompts. It went with a "less is more" approach, often giving the simplest, yet most appropriate image. It is, however, the only one that is not accessible in a free version - though that may change, as it's quite new.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gemini (Imagen 4) marked a clear improvement over last year’s Duet AI. It can mostly render text (though not yet consistently) and can finally draw multiple binoculars without merging them into a monstrosity. It still has problems counting (with perhaps a small improvement over Duet AI) and misses fine prompt details, but it's available even in the free tier. Plus, its integration into the Google Docs suite is a nice convenience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microsoft's Image Creator seems to use pretty much the same (DALL·E 3) engine as last year. It actually got lower marks, possibly down to chance - I only used the results of the first attempt, so some luck is involved. It's still good for creative results, but it's not accurate and can't do text, so it's rather limited for serious use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grok (Grok 3) showed promise with photorealistic visuals and some artistic "flair", but was the most inconsistent, occasionally misinterpreting prompts, producing malformed compositions, or displaying contextual confusion.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Re-running the exact same 12 test rounds as last year highlighted that prompt interpretation, factual accuracy and visual clarity remain difficult to balance across models. Some were good in freeform design, others in precision, but none yet do both equally well, although ChatGPT got close. Still, the overall quality is clearly up from 2024.&lt;/p&gt;

&lt;p&gt;So, for most professional use cases, especially when accuracy matters, ChatGPT 4o can provide great results, with Gemini being a decent alternative most of the time. Image Creator remains a decent free option for creative use, while Grok does show some interesting potential - it often goes a different way compared to the others.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>google</category>
      <category>grok</category>
    </item>
    <item>
      <title>How Geekbench 6 Multicore Is Broken by Design</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Wed, 07 May 2025 00:39:29 +0000</pubDate>
      <link>https://forem.com/dkechag/how-geekbench-6-multicore-is-broken-by-design-44ig</link>
      <guid>https://forem.com/dkechag/how-geekbench-6-multicore-is-broken-by-design-44ig</guid>
      <description>&lt;p&gt;As a developer, performance is very important to me (clearly, I am not a front-end dev - hah!). It's crucial for the company I work for, too, affecting both cost and user experience.  I've been regularly performing and &lt;a href="https://dev.to/dkechag/cloud-provider-comparison-2024-vm-performance-price-3h4l"&gt;publishing cloud VM CPU comparisons&lt;/a&gt; to share my insights. Although my primary tool is my own &lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;DKbench suite&lt;/a&gt;, I tend to include Geekbench 5 in the comparison, mostly due to the abundance of available published results. This is despite Geekbench 6 being available for some time now. You might wonder why I haven't switched: Simply put, I found that Geekbench 6 multi-core is fundamentally "broken", and I thought I'd explain why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table of Contents:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Geekbench 6 Fails at Multi-Core Scaling&lt;/li&gt;
&lt;li&gt;Geekbench 6's Shared Task Model&lt;/li&gt;
&lt;li&gt;What Should a CPU Benchmark Measure?&lt;/li&gt;
&lt;li&gt;
Poor Implementation of Multi-threaded workloads

&lt;ul&gt;
&lt;li&gt;The Text Processing benchmark&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Geekbench 6 Fails at Multi-Core Scaling
&lt;/h2&gt;

&lt;p&gt;Geekbench 6 barely scales on multi-core systems. This is unlike Geekbench 5, which, although its scaling is not linear, continuously benefits from additional cores. To demonstrate, here is a comparison using Google Cloud C3D VMs with SMT disabled (vCPU = full core), showing the scaling behaviour of DKbench, Geekbench 5 and Geekbench 6 across 2 to 180 cores:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4blecutyk02exid374to.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4blecutyk02exid374to.png" alt="Geekbench 6 scaling" width="600" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dotted line is theoretical max / ideal scaling.&lt;/p&gt;

&lt;p&gt;For clarity, here's the DKbench vs Geekbench 6 data in table form, along with Geekbench 6's efficiency versus ideal scaling:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cores&lt;/th&gt;
&lt;th&gt;DKbench Scaling&lt;/th&gt;
&lt;th&gt;Geekbench 6 Scaling&lt;/th&gt;
&lt;th&gt;Geekbench 6 % of Ideal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.8&lt;/td&gt;
&lt;td&gt;89.91%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;4.0&lt;/td&gt;
&lt;td&gt;3.2&lt;/td&gt;
&lt;td&gt;79.92%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;7.9&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;61.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;15.2&lt;/td&gt;
&lt;td&gt;7.9&lt;/td&gt;
&lt;td&gt;49.54%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;30.4&lt;/td&gt;
&lt;td&gt;10.5&lt;/td&gt;
&lt;td&gt;32.69%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;td&gt;45.5&lt;/td&gt;
&lt;td&gt;11.4&lt;/td&gt;
&lt;td&gt;23.66%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;60.0&lt;/td&gt;
&lt;td&gt;12.1&lt;/td&gt;
&lt;td&gt;18.84%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;td&gt;82.6&lt;/td&gt;
&lt;td&gt;12.1&lt;/td&gt;
&lt;td&gt;13.46%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;td&gt;158.8&lt;/td&gt;
&lt;td&gt;10.3&lt;/td&gt;
&lt;td&gt;5.73%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Geekbench 6 scaling starts poorly and pretty much flattens at 32-64 cores, at which point you get around 10-12x the performance of a single core. Shockingly, performance eventually even declines, with 180 cores performing worse than 32!&lt;br&gt;
For comparison, Geekbench 5 manages 63x single-core performance on those 180 cores (and DKbench a cool 159x).&lt;/p&gt;
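&lt;p&gt;The "% of ideal" column is simply the multi-core speedup divided by the core count. A quick sketch of the calculation (using the rounded speedups from the table, so the percentages come out slightly different from the table's, which I computed from the raw scores):&lt;/p&gt;

```python
# Efficiency vs. ideal linear scaling: multi-core speedup / core count.
# Speedups are the (rounded) Geekbench 6 values from the table above.
gb6_speedup = {2: 1.8, 4: 3.2, 8: 4.9, 16: 7.9, 32: 10.5,
               48: 11.4, 64: 12.1, 90: 12.1, 180: 10.3}

for cores, speedup in gb6_speedup.items():
    pct_of_ideal = 100 * speedup / cores
    print(f"{cores:3d} cores: {speedup:5.1f}x  ({pct_of_ideal:5.1f}% of ideal)")
```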

&lt;h2&gt;
  
  
  Geekbench 6's Shared Task Model
&lt;/h2&gt;

&lt;p&gt;Let's dive into some technical details to figure out the reason behind this behaviour. Geekbench helpfully &lt;a href="https://www.geekbench.com/doc/geekbench6-benchmark-internals.pdf" rel="noopener noreferrer"&gt;publishes some internal details&lt;/a&gt;, and there is a "Multi-Threading" section which explains:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Geekbench 6 uses a “shared task” model for multi-threading, rather than the “separate task” model used in earlier versions of Geekbench. The “shared task” approach better models how most applications use multiple cores.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Basically, they say that in previous versions of Geekbench, multi-core mode would create more work to give to each core separately. In Geekbench 6 there is one task that they try to serve with multiple threads communicating with each other (IPC), which is indeed how, for example, Photoshop would try to apply a filter to an image using more cores.&lt;/p&gt;
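&lt;p&gt;The distinction can be sketched roughly as follows (my own toy Python illustration, not Geekbench's actual implementation):&lt;/p&gt;

```python
# Toy contrast of the two multi-threading models.
from concurrent.futures import ProcessPoolExecutor

TASK = 1_000_000  # size of one "workload unit"

def work(chunk):
    return sum(i * i for i in chunk)

def separate_task(n_workers):
    """Geekbench 5 style: each core gets its own full, independent copy
    of the workload, so total work grows with the core count."""
    with ProcessPoolExecutor(n_workers) as ex:
        return list(ex.map(work, [range(TASK)] * n_workers))

def shared_task(n_workers):
    """Geekbench 6 style: one fixed task is split among cooperating
    workers, whose partial results must then be combined - the shared,
    coordination-heavy part that limits scaling."""
    step = TASK // n_workers
    chunks = [range(i * step, (i + 1) * step) for i in range(n_workers)]
    with ProcessPoolExecutor(n_workers) as ex:
        return sum(ex.map(work, chunks))
```

&lt;p&gt;In the "separate task" sketch the workers never interact, so scaling is limited only by the hardware; in the "shared task" sketch the split/combine steps (and any inter-worker communication) are overhead that grows with the worker count.&lt;/p&gt;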

&lt;p&gt;There are some fundamental issues with this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Home usage:&lt;/strong&gt; While the "shared task" approach may closely model a (specific) single app, typical user environments involve multiple apps at the same time, while the OS is running dozens of background tasks. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server usage:&lt;/strong&gt; The "shared task" idea is even less relevant for most cases of servers made for parallel tasks (e.g., processing multiple users/images/etc concurrently), not singular tasks processed faster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Should a CPU Benchmark Measure?
&lt;/h2&gt;

&lt;p&gt;Let's go back to the basics for a moment. What exactly is a CPU benchmark for? I'd say a useful CPU benchmark typically does one of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Application-Specific Benchmark:&lt;/strong&gt; Tests the performance of specific software. Ideal when workloads are predictable. This type of benchmarking, in multi-core context will tell you whether you can expect performance gains for your app by simply adding cores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generic Benchmark:&lt;/strong&gt; Measures general CPU capability by stressing all parts of the CPU with diverse workloads, offering insights into performance across various scenarios. In multi-core mode, these tests should similarly load all cores of the CPU, exposing any limitations of the processor design (lower "all core" boost speed, heat throttling etc.).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are some benchmarks that fall in between: specific applications that happen to be quite good at stressing a single core or all cores of a CPU. The &lt;strong&gt;Cinebench&lt;/strong&gt; benchmark is such an example.&lt;/p&gt;

&lt;p&gt;Geekbench traditionally fit the generic benchmark category, providing useful rough comparisons, where the overall score usually agreed with what I would get from my custom benchmarks. Geekbench 6 breaks this in multi-core mode; for me it is no longer a generic benchmark:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Geekbench 6 Multicore simply measures the performance of Geekbench's particular implementation of very specific workloads.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Poor Implementation of Multi-threaded workloads
&lt;/h2&gt;

&lt;p&gt;Even if we accept the premise of the "shared task" model, Geekbench 6 does a notably poor job implementing it, leading to &lt;em&gt;decreasing&lt;/em&gt; performance when adding cores. From the internals document it seems that their approach to multi-core scaling often involves arbitrary and fixed scaling of workloads, typically setting multi-core tasks at exactly four times the single-core workload, regardless of CPU size. This approach explains the respectable 80% scaling observed up to 4 cores. Realistically, competent multi-threaded software would dynamically scale concurrent workloads to match available cores.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Text Processing benchmark
&lt;/h3&gt;

&lt;p&gt;It gets worse than this. I looked for the least scalable benchmark of the suite, which (surprisingly, as I was expecting some sort of non-parallelizable algorithm) turned out to be "Text Processing":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqsznk3yzlgfjmgzjjwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqsznk3yzlgfjmgzjjwm.png" alt="Geekbench 6 Text Processing scaling" width="601" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cores&lt;/th&gt;
&lt;th&gt;Text Processing Scaling&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;1.182&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;1.303&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;1.346&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;1.280&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;1.300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;td&gt;1.278&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;1.277&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;td&gt;1.279&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;td&gt;1.274&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is so bizarre. Their "text processing" representative benchmark scales to only about 1.35x single-core performance, peaking at 8 cores, then declining afterward. &lt;/p&gt;
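&lt;p&gt;A plateau like this maps neatly onto Amdahl's law: if only a fraction &lt;em&gt;p&lt;/em&gt; of the work runs in parallel, the speedup on any number of cores is capped at 1/(1-p). A quick back-of-the-envelope check (my own estimate, not something from the Geekbench docs):&lt;/p&gt;

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# parallel fraction of the workload. As n grows, it tends to 1 / (1 - p).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(plateau):
    # Invert the large-n limit: plateau = 1 / (1 - p).
    return 1.0 - 1.0 / plateau

p = parallel_fraction(1.28)  # Text Processing plateaus around 1.28x
# p comes out just under 0.22, i.e. nearly 80% of the workload is serial.
```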

&lt;p&gt;It's even more bizarre if you read the internals doc:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Text Processing workload loads numerous files, parses the contents using regular expressions, stores metadata in a SQLite database, and finally exports the content to a different format. It models typical text processing tasks that manipulate, analyze, and transform data to reformat it for publication and to gain insights. The input and output files are stored using an in-memory encrypted file system.&lt;br&gt;
[...] and processes 190 Markdown files as its input.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So they describe it as processing 190 Markdown files (using regular expressions, storing metadata in SQLite, and exporting results), yet it takes virtually no advantage of parallel processing. The nearly flat scaling strongly suggests a severe implementation bottleneck, perhaps a poorly managed global write lock on SQLite or something similar that serializes the whole pipeline. There are no further details to pinpoint what they did wrong, but they clearly did it very wrong.&lt;/p&gt;
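&lt;p&gt;To illustrate the suspected failure mode (a guess at the cause, since Geekbench's source isn't available): if parsing fans out to threads but every result is written through one shared SQLite connection behind a lock, the write path serializes the pipeline no matter how many cores you add. A minimal sketch in Python:&lt;/p&gt;

```python
import re
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reconstruction of the suspected bottleneck: the regex
# parsing can run in parallel, but a single shared SQLite connection
# forces every metadata write through one lock.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE meta (doc INTEGER, headings INTEGER)")
db_lock = threading.Lock()  # SQLite allows only one writer at a time

def process(doc_id, text):
    headings = len(re.findall(r"^#+ ", text, re.MULTILINE))  # parallel part
    with db_lock:                                            # serialized part
        conn.execute("INSERT INTO meta VALUES (?, ?)", (doc_id, headings))

docs = [f"# Title {i}\n\nbody\n## Sub\n" for i in range(190)]  # 190 files, as in the doc
with ThreadPoolExecutor(max_workers=8) as ex:
    for i, d in enumerate(docs):
        ex.submit(process, i, d)

conn.commit()
print(conn.execute("SELECT COUNT(*) FROM meta").fetchone()[0])
```

If the serialized section dominates (e.g. the "export" and database steps), adding threads beyond a handful buys nothing, which is exactly the shape of the table above.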

&lt;p&gt;This benchmark effectively claims that CPUs with more than 4 cores provide no benefit for "text processing" tasks...&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Geekbench 6’s multi-core benchmark is not merely flawed, it's fundamentally broken (and mostly by design). Its adoption of the "shared task" model and poor implementation make it unrepresentative of real-world multi-core CPU performance. For more realistic and scalable benchmarks, I'd say stick to Geekbench 5, or try others - maybe even give &lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;DKbench&lt;/a&gt; a try (there's a &lt;a href="https://hub.docker.com/r/dkechag/dkbench" rel="noopener noreferrer"&gt;Docker version&lt;/a&gt; so you don't have to install anything).&lt;/p&gt;

</description>
      <category>benchmarking</category>
      <category>geekbench</category>
      <category>cpu</category>
    </item>
    <item>
      <title>C4D VMs on Google Cloud: Breaking Records Again with EPYC Turin</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Wed, 16 Apr 2025 23:32:32 +0000</pubDate>
      <link>https://forem.com/dkechag/c4d-vms-on-google-cloud-breaking-records-again-with-epyc-turin-14ic</link>
      <guid>https://forem.com/dkechag/c4d-vms-on-google-cloud-breaking-records-again-with-epyc-turin-14ic</guid>
      <description>&lt;p&gt;Over the past year, Google Cloud has been rolling out its fourth-gen compute VMs. First came the Intel-powered &lt;strong&gt;c4&lt;/strong&gt;, which did much better than past Intel instances in my &lt;a href="https://dev.to/dkechag/cloud-provider-comparison-2024-vm-performance-price-3h4l"&gt;2024 Cloud Comparison&lt;/a&gt;, but ultimately wasn't a significant leap compared to Google's own excellent &lt;a href="https://dev.to/dkechag/google-cloud-c3d-review-record-breaking-performance-with-epyc-genoa-g13"&gt;&lt;strong&gt;c3d&lt;/strong&gt;&lt;/a&gt;. The Google Axion-powered &lt;strong&gt;c4a&lt;/strong&gt; ARM instances were released next, showing &lt;a href="https://dev.to/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9"&gt;excellent multi-threaded performance and value&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Last week, during Google Next, the &lt;strong&gt;5th Gen&lt;/strong&gt; AMD EPYC (Zen 5) &lt;strong&gt;c4d&lt;/strong&gt; instances &lt;a href="https://cloud.google.com/blog/products/compute/delivering-new-compute-innovations-and-offerings" rel="noopener noreferrer"&gt;were announced&lt;/a&gt;. You can follow the announcement link to apply for a preview, or wait a bit longer until general availability.&lt;br&gt;
I've been testing them in preview for &lt;a href="https://www.spareroom.co.uk" rel="noopener noreferrer"&gt;SpareRoom&lt;/a&gt; and they seem to be one of the most impressive releases in recent memory.&lt;/p&gt;

&lt;p&gt;Table of contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Comparison&lt;/li&gt;
&lt;li&gt;
Performance per 2x vCPU

&lt;ul&gt;
&lt;li&gt;DKbench Suite&lt;/li&gt;
&lt;li&gt;FFmpeg Video Compression&lt;/li&gt;
&lt;li&gt;Linux Kernel Compilation&lt;/li&gt;
&lt;li&gt;OpenSSL (AVX 512)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Performance of Full Size Instances

&lt;ul&gt;
&lt;li&gt;DKbench&lt;/li&gt;
&lt;li&gt;7zip Compression&lt;/li&gt;
&lt;li&gt;Compilation&lt;/li&gt;
&lt;li&gt;OpenSSL&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;
Addendum: Test methodology

&lt;ul&gt;
&lt;li&gt;Benchmark::DKbench&lt;/li&gt;
&lt;li&gt;OpenBenchmarking.org (phoronix test suite)&lt;/li&gt;
&lt;li&gt;FFmpeg compression test&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Comparison
&lt;/h2&gt;

&lt;p&gt;The new &lt;strong&gt;c4d&lt;/strong&gt; comes in 3 variants: &lt;code&gt;standard&lt;/code&gt; with &lt;strong&gt;3.5GB&lt;/strong&gt; RAM/vCPU, &lt;code&gt;highcpu&lt;/code&gt; with &lt;strong&gt;1.5GB&lt;/strong&gt;/vCPU and &lt;code&gt;highmem&lt;/code&gt; with &lt;strong&gt;7.5GB&lt;/strong&gt;/vCPU. vCPUs are SMT threads (2 threads per physical core), and you can specify from &lt;strong&gt;2&lt;/strong&gt; vCPUs to a massive &lt;strong&gt;384&lt;/strong&gt;, which is &lt;strong&gt;2&lt;/strong&gt; processors of &lt;strong&gt;96&lt;/strong&gt; physical cores (&lt;strong&gt;192&lt;/strong&gt; SMT threads) each. As usual, it's a custom Zen 5 chip (&lt;strong&gt;EPYC 9B45&lt;/strong&gt;) with a core clock of &lt;strong&gt;4.1GHz&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I've set up a comparison picking the most relevant GCP VM types: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;CPU&lt;/th&gt;
&lt;th&gt;vCPUs&lt;/th&gt;
&lt;th&gt;RAM / vCPU&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;c4d&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AMD EPYC Turin&lt;/td&gt;
&lt;td&gt;2-384&lt;/td&gt;
&lt;td&gt;1.5-7.5GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;c3d&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AMD EPYC Genoa&lt;/td&gt;
&lt;td&gt;4-360&lt;/td&gt;
&lt;td&gt;2-8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;t2d&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AMD EPYC Milan&lt;/td&gt;
&lt;td&gt;1-60&lt;/td&gt;
&lt;td&gt;4GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;c4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Intel Emerald Rapids&lt;/td&gt;
&lt;td&gt;2-192&lt;/td&gt;
&lt;td&gt;2-7.5GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;c3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Intel Sapphire Rapids&lt;/td&gt;
&lt;td&gt;4-176&lt;/td&gt;
&lt;td&gt;2-8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;n2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Intel Ice Lake&lt;/td&gt;
&lt;td&gt;2-128&lt;/td&gt;
&lt;td&gt;1-8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;c4a&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Google Axion&lt;/td&gt;
&lt;td&gt;1-72&lt;/td&gt;
&lt;td&gt;2-8GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Performance per 2x vCPU
&lt;/h2&gt;

&lt;p&gt;For an apples-to-apples comparison, I will use 2x vCPU instances. For &lt;strong&gt;c3&lt;/strong&gt;/&lt;strong&gt;c3d&lt;/strong&gt;, Google's "visible cores" limiter is necessary, as 4x vCPU is the minimum size. For most instances, 2x vCPU means 1 core / 2 hyper-threads, except for &lt;strong&gt;t2d&lt;/strong&gt; and &lt;strong&gt;c4a&lt;/strong&gt;, which have no SMT/Hyper-Threading. That gives those two types an advantage in multi-threaded loads, as they have twice the physical cores, yet you don't pay more, since the per-vCPU price is not very different.&lt;br&gt;
Note that I swapped the &lt;strong&gt;t2d&lt;/strong&gt; for the &lt;strong&gt;n2d&lt;/strong&gt; as the EPYC Milan representative here, as the latter goes up to 112 cores (224 threads) vs just 60 for &lt;strong&gt;t2d&lt;/strong&gt;.&lt;/p&gt;
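&lt;p&gt;For reference, this is roughly how you'd request a &lt;strong&gt;c3d&lt;/strong&gt; with only 2 visible vCPUs (a sketch using gcloud's advanced machine features; the instance name and zone are illustrative, so check the current gcloud docs before relying on it):&lt;/p&gt;

```shell
# Create a 4-vCPU c3d but expose only 1 physical core (2 vCPUs) to the
# guest, matching the 2x vCPU test configuration used here.
gcloud compute instances create bench-c3d \
    --machine-type=c3d-standard-4 \
    --zone=us-east4-a \
    --visible-core-count=1
```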
&lt;h3&gt;
  
  
  DKbench Suite
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/dkechag/Benchmark-DKbench" rel="noopener noreferrer"&gt;DKbench CPU test suite&lt;/a&gt; is used to measure performance of the type of generic compute workloads we run on our servers. The test methodology is available as an addendum at the end of this post. The way to read the chart is that you are looking at green for max performance (single-thread load), and orange for value (multi-thread load on a 2x vCPU instance):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p0517ecpe1te5oi81ed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p0517ecpe1te5oi81ed.png" alt="DKbench 2x vCPU" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've seen my multi-cloud comparisons, you'll recognise that this is by far the fastest single-core performance on a cloud server. It is over &lt;strong&gt;40%&lt;/strong&gt; higher than the fastest third gen (&lt;strong&gt;c3d&lt;/strong&gt;). It's also almost &lt;strong&gt;20%&lt;/strong&gt; faster than the &lt;strong&gt;c4&lt;/strong&gt;, a gap that widens in multi-threaded loads (as usual, Intel's HT is not as efficient).&lt;br&gt;
The value of non-SMT types is also evident: an Axion &lt;strong&gt;c4a&lt;/strong&gt; core is &lt;strong&gt;40%&lt;/strong&gt; slower, but you pay per vCPU and get a full physical core for each, which gives you more performance if your workload can fully utilize them. The aging &lt;strong&gt;t2d&lt;/strong&gt; Milan is similar: much slower per core, but still competitive in value, which is why we use it for several applications to this day.&lt;/p&gt;

&lt;p&gt;Moving on to some less generic benchmarks that use both vCPUs:&lt;/p&gt;
&lt;h3&gt;
  
  
  FFmpeg Video Compression
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm1nqlzgka47pgvd3vdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm1nqlzgka47pgvd3vdx.png" alt="FFmpeg 2x vCPU" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the FFmpeg/libx264 compression test, &lt;strong&gt;c4d&lt;/strong&gt; does at least as well, with over &lt;strong&gt;40%&lt;/strong&gt; gains over &lt;strong&gt;c3d&lt;/strong&gt;. Thanks to better SMT, it even manages a &lt;strong&gt;35%&lt;/strong&gt; speed advantage over &lt;strong&gt;c4&lt;/strong&gt; in the multi-threaded comparison.&lt;/p&gt;
&lt;h3&gt;
  
  
  Linux Kernel Compilation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8mhc4707kalw7v5lh53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8mhc4707kalw7v5lh53.png" alt="Compilation 2x vCPU" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Linux kernel compilation benchmark is not purely CPU-bound (the rest of the system, including the disk, is involved), but we still see close to &lt;strong&gt;20%&lt;/strong&gt; and &lt;strong&gt;40%&lt;/strong&gt; gains over &lt;strong&gt;c4&lt;/strong&gt; and &lt;strong&gt;c3d&lt;/strong&gt; respectively.&lt;/p&gt;
&lt;h3&gt;
  
  
  OpenSSL (AVX 512)
&lt;/h3&gt;

&lt;p&gt;The OpenSSL RSA4096 encryption test is AVX-512 heavy. Here is an idea of how &lt;strong&gt;c4d&lt;/strong&gt; performs on such workloads:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgy9f7drsembqw1cphu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsgy9f7drsembqw1cphu.png" alt="OpenSSL 2x vCPU" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see &lt;strong&gt;25%&lt;/strong&gt; gains over &lt;strong&gt;c3d&lt;/strong&gt; and &lt;strong&gt;17%&lt;/strong&gt; over &lt;strong&gt;c4&lt;/strong&gt;. Not as impressive as the other benchmarks, but still significant.&lt;/p&gt;

&lt;p&gt;While I consider the 2x vCPU tests the most telling, as they make it easy for me to extrapolate price &amp;amp; performance of our commonly used 4x-32x vCPU instances, there will be applications where a full-size instance will be used. So let’s see what a whole Turin processor (or two) can do.&lt;/p&gt;
&lt;h2&gt;
  
  
  Performance of Full Size Instances
&lt;/h2&gt;

&lt;p&gt;As mentioned above, the &lt;strong&gt;c4d&lt;/strong&gt; maxes out at &lt;strong&gt;192&lt;/strong&gt; cores / &lt;strong&gt;384&lt;/strong&gt; threads - a modest bump over the previous maximum of &lt;strong&gt;180&lt;/strong&gt; cores / &lt;strong&gt;360&lt;/strong&gt; threads on the &lt;strong&gt;c3d&lt;/strong&gt;. However, paired with faster cores, I expected standout performance - and the &lt;strong&gt;c4d&lt;/strong&gt; didn’t disappoint. &lt;/p&gt;
&lt;h3&gt;
  
  
  DKbench
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vtfe5ca228ngz1wjb4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vtfe5ca228ngz1wjb4r.png" alt="DKbench" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The full-size &lt;strong&gt;c4d&lt;/strong&gt; delivers over &lt;strong&gt;30%&lt;/strong&gt; more processing power compared to the already-impressive 360-thread &lt;strong&gt;c3d&lt;/strong&gt;. Even more remarkable, a single 96-core (192-thread) EPYC Turin processor in &lt;strong&gt;c4d&lt;/strong&gt; outperforms the Intel-powered &lt;strong&gt;c4&lt;/strong&gt;'s full 192-thread configuration by over &lt;strong&gt;40%&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  7zip Compression
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dpcmkbagonsspm87mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4dpcmkbagonsspm87mz.png" alt="7zip" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At a compression speed of 1,732,434 MIPS, it breaks the current openbenchmarking.org record of 1,658,291 MIPS, which is held by a 2x EPYC 6845 system (320 cores / 640 threads!).&lt;/p&gt;
&lt;h3&gt;
  
  
  Compilation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61q4miqtnh5n54tqpq1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61q4miqtnh5n54tqpq1w.png" alt="Compilation" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At 18.3s it matches the current best times on openbenchmarking.org which come from other 2-processor AMD Zen 5 systems.&lt;/p&gt;
&lt;h3&gt;
  
  
  OpenSSL
&lt;/h3&gt;

&lt;p&gt;Let's look again at this AVX 512-capable benchmark:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g6cmatniwdauyvxl29h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g6cmatniwdauyvxl29h.png" alt="OpenSSL" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;c4d&lt;/strong&gt; brings another &lt;strong&gt;25%&lt;/strong&gt; of performance over the &lt;strong&gt;c3d&lt;/strong&gt;, similar to what we saw on the 2x vCPU test.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;c4d&lt;/strong&gt; is the kind of new release we are always hoping for, essentially providing a one-click (or single-line code change) ~&lt;strong&gt;40%&lt;/strong&gt; performance boost. Moving to newer CPU generations on Google Cloud typically doesn’t come with increased cost, as maintaining older, less efficient hardware is generally more expensive for providers. Pricing isn’t confirmed yet, but traditionally GCP does not price AMD instances higher than Intel ones, so at significantly better performance than the &lt;strong&gt;c4&lt;/strong&gt;, it should offer great value for most applications. Exceptions may include cases where you maintain a full load across all vCPUs and prioritize price-performance ratio - there, non-SMT instances like the Axion &lt;strong&gt;c4a&lt;/strong&gt; (or even the older EPYC Milan &lt;strong&gt;t2d&lt;/strong&gt;) could have the edge.&lt;/p&gt;
&lt;h2&gt;
  
  
  Addendum: Test methodology
&lt;/h2&gt;

&lt;p&gt;You can &lt;a href="https://docs.google.com/spreadsheets/d/1N-pmAd-CubnL5hon6KzCQC2_yh45mnXDb_lIsb-sYQE/edit?usp=sharing" rel="noopener noreferrer"&gt;find the full benchmark results listed here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All instances were set up with a 10GB "Hyperdisk" or "SSD persistent disk", using Google's Debian 12 (bookworm) image in us-east4.&lt;/p&gt;

&lt;p&gt;Some system packages were installed to support the DKbench and phoronix test suites:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su
apt-get update
apt install -y wget build-essential cpanminus libxml-simple-perl php-cli php-xml php-zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Benchmark::DKbench
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;DKbench suite&lt;/a&gt; recorded 5 iterations over 19 benchmarks. If you have followed my previous cloud performance comparisons, you'll know it is a suite based on Perl and C/XS, meant to evaluate the performance of the generic CPU workloads that a typical SpareRoom job or web server runs. It is very scalable, which makes it well suited to evaluating massive VMs.&lt;/p&gt;

&lt;p&gt;To set up the benchmark with a standardized environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cpanm -n BioPerl Benchmark::DKbench
setup_dkbench -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run (5 iterations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dkbench -i 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  OpenBenchmarking.org (phoronix test suite)
&lt;/h3&gt;

&lt;p&gt;To set up the &lt;a href="https://www.phoronix-test-suite.com/" rel="noopener noreferrer"&gt;phoronix test suite&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://phoronix-test-suite.com/releases/phoronix-test-suite-10.8.4.tar.gz
tar xvfz phoronix-test-suite-10.8.4.tar.gz
cd phoronix-test-suite
./install-sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the benchmarks I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phoronix-test-suite benchmark compress-7zip
phoronix-test-suite benchmark build-linux-kernel
phoronix-test-suite benchmark openssl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  FFmpeg compression test
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For ARM instances - replace 'arm64' with 'amd64' for x86:

wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-arm64-static.tar.xz
tar -xJf ffmpeg-release-arm64-static.tar.xz --wildcards --no-anchored 'ffmpeg' -O &amp;gt; /usr/bin/ffmpeg
chmod +x /usr/bin/ffmpeg
wget https://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_720p_h264.mov
time ffmpeg -i big_buck_bunny_720p_h264.mov -c:v libx264 -threads 1 out264a.mp4
time ffmpeg -i big_buck_bunny_720p_h264.mov -c:v libx264 -threads 2 out264b.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>googlecloud</category>
      <category>amd</category>
      <category>gcp</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Benchmark CPUs Easily with the dkbench Docker image</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Thu, 27 Mar 2025 02:21:48 +0000</pubDate>
      <link>https://forem.com/dkechag/benchmark-cpus-easily-with-the-dkbench-docker-image-462k</link>
      <guid>https://forem.com/dkechag/benchmark-cpus-easily-with-the-dkbench-docker-image-462k</guid>
      <description>&lt;p&gt;I developed &lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;Benchmark::DKbench&lt;/a&gt; to use in my &lt;a href="https://dev.to/dkechag/cloud-vm-benchmarks-2026-performance-price-1i1m"&gt;Cloud VM CPU comparisons&lt;/a&gt;. It's a great tool for general CPU benchmarking. In single-thread it correlates quite well with SPEC or Geekbench 5+, but it also scales efficiently to hundreds of cores on large cloud VMs and is open source. The only drawback was some setup required (a working Perl system and some basic libraries and CPAN modules).&lt;/p&gt;

&lt;p&gt;To simplify the process and ensure a consistent benchmarking environment across different systems, I created a ready-to-use &lt;a href="//hub.docker.com/r/dkechag/dkbench"&gt;Docker image&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;If you have Docker installed, you can be benchmarking in seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; dkechag/dkbench
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This downloads the image, starts the container and connects you to it, from where you simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dkbench
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Image Details
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker Hub: &lt;a href="https://hub.docker.com/r/dkechag/dkbench" rel="noopener noreferrer"&gt;hub.docker.com/r/dkechag/dkbench&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Source: &lt;a href="https://github.com/dkechag/dkbench-docker" rel="noopener noreferrer"&gt;github.com/dkechag/dkbench-docker&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Architectures: &lt;code&gt;amd64&lt;/code&gt;, &lt;code&gt;arm64&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sample Output
&lt;/h2&gt;

&lt;p&gt;Output of the base command on a &lt;code&gt;c4a-highcpu-72&lt;/code&gt; (Google Axion) VM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--------------- Software ---------------
DKbench v3.00
Perl v5.36.0 (threads, multi)
OS: Debian GNU/Linux 12.10 (bookworm)
--------------- Hardware ---------------
CPU type:  (aarch64)
CPUs: 72 (72 Cores)
----------------------------------------
DKbench single-thread run:
Benchmark           Score               Pass/Fail
Astro:               1108               Pass
BioPerl Monomers:    1090               Pass
CSS::Inliner:        1135               Pass
Crypt::JWT:          1201               Pass
DBI/SQL:             1385               Pass
DateTime:            1416               Pass
Digest:               925               Pass
Encode:              1302               Pass
HTML::FormatText:    1121               Pass
Imager:              1344               Pass
JSON::XS:            1227               Pass
Math::DCT:           1211               Pass
Math::MatrixReal:     937               Pass
Moose:               1138               Pass
Moose prove:         1354               Pass
Primes:              1071               Pass
Regex/Subst:         1073               Pass
Regex/Subst utf8:    1164               Pass
Text::Levenshtein:   1282               Pass
Overall Score:       1183
----------------------------------------
DKbench 72-thread run:
Benchmark           Score               Pass/Fail
Astro:              79101               Pass
BioPerl Monomers:   77803               Pass
CSS::Inliner:       77056               Pass
Crypt::JWT:         86454               Pass
DBI/SQL:            100131              Pass
DateTime:           101748              Pass
Digest:             66419               Pass
Encode:             89550               Pass
HTML::FormatText:   75762               Pass
Imager:             95798               Pass
JSON::XS:           88109               Pass
Math::DCT:          86547               Pass
Math::MatrixReal:   67094               Pass
Moose:              81572               Pass
Moose prove:        76786               Pass
Primes:             63421               Pass
Regex/Subst:        76028               Pass
Regex/Subst utf8:   83210               Pass
Text::Levenshtein:  91956               Pass
Overall Score:      82344
----------------------------------------
Multi thread Scalability:
Benchmark               Multi perf xSingle      Multi scalability %
Astro:                  71.41                   99
BioPerl Monomers:       71.36                   99
CSS::Inliner:           67.88                   94
Crypt::JWT:             71.98                   100
DBI/SQL:                72.32                   100
DateTime:               71.87                   100
Digest:                 71.78                   100
Encode:                 68.76                   96
HTML::FormatText:       67.59                   94
Imager:                 71.27                   99
JSON::XS:               71.81                   100
Math::DCT:              71.48                   99
Math::MatrixReal:       71.58                   99
Moose:                  71.67                   100
Moose prove:            56.72                   79
Primes:                 59.23                   82
Regex/Subst:            70.88                   98
Regex/Subst utf8:       71.49                   99
Text::Levenshtein:      71.70                   100
----------------------------------------
DKbench summary (19 benchmarks, 72 threads):
Single:              1183
Multi:              82344
Multi/Single perf:  70.99x  (67.59 - 72.32)
Multi scalability:  98.6%   (94% - 100%)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Happy benchmarking!&lt;/p&gt;

</description>
      <category>perl</category>
      <category>performance</category>
      <category>docker</category>
      <category>benchmarking</category>
    </item>
    <item>
      <title>Create a static mirror of your DEV blog</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Mon, 24 Mar 2025 01:32:14 +0000</pubDate>
      <link>https://forem.com/dkechag/create-a-static-mirror-of-your-dev-blog-d6a</link>
      <guid>https://forem.com/dkechag/create-a-static-mirror-of-your-dev-blog-d6a</guid>
      <description>&lt;p&gt;I started using DEV at the suggestion of &lt;a href="https://perlweekly.com/" rel="noopener noreferrer"&gt;Perl Weekly&lt;/a&gt;, and I was quite pleased with it - until I discovered that links to dev.to are effectively "shadowbanned" on several major platforms (Reddit, Hacker News, etc.). Posts containing DEV URLs would simply not be shown to users, making it impossible to share content effectively.&lt;/p&gt;

&lt;p&gt;To work around this, I thought I would need a way to publish my DEV articles on my own domain so I could freely share them. There are some DEV tutorials out there that explain how to consume the API using frontend frameworks like React, however I don't enjoy frontend at all and I did not want to spend much time on that.&lt;/p&gt;

&lt;p&gt;My solution was to write a simple Perl script that builds static versions of the articles, along with an index page. A Perl 5 script will run anywhere, including the old shared Linux hosting account I still keep on IONOS, and I really like the speed of static sites.&lt;/p&gt;

&lt;p&gt;I thought this was an ideal first task for ChatGPT. Indeed, after 10-15 minutes and a few prompts I had a sort-of-working solution without having to open the API documentation or write any CSS. I then spent an hour or two fixing bugs, refactoring some of the ancient-looking code and tweaking/adding features (e.g. tag index pages) - tasks that I enjoy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devblog.ecuadors.net/" rel="noopener noreferrer"&gt;Here is the result&lt;/a&gt;. You can find the Perl script and assets &lt;a href="https://github.com/dkechag/dev_to_static" rel="noopener noreferrer"&gt;in this repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It will run on pretty much any Perl 5 version (tested down to 5.10) with some basic CPAN modules (&lt;a href="https://metacpan.org/pod/LWP::UserAgent" rel="noopener noreferrer"&gt;LWP::UserAgent&lt;/a&gt;, &lt;a href="https://metacpan.org/pod/JSON" rel="noopener noreferrer"&gt;JSON&lt;/a&gt;, &lt;a href="https://metacpan.org/pod/Path::Tiny" rel="noopener noreferrer"&gt;Path::Tiny&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;To use it, check the project out from the repo and specify at least your DEV user name when calling the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./dev_to_static.pl &lt;span class="nt"&gt;-u&lt;/span&gt; &amp;lt;username&amp;gt;

&lt;span class="c"&gt;# or to also specify a target directory and name your blog:&lt;/span&gt;
./dev_to_static.pl &lt;span class="nt"&gt;-u&lt;/span&gt; &amp;lt;username&amp;gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &amp;lt;directory&amp;gt; &lt;span class="nt"&gt;-title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Blog name"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try option &lt;code&gt;-h&lt;/code&gt; to get more help.&lt;/p&gt;

&lt;p&gt;You can run it on your web host directly, or run locally and copy the resulting directory to your host.&lt;/p&gt;

&lt;p&gt;A nightly cron can update the site with new articles.&lt;/p&gt;
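&lt;p&gt;For example, a crontab entry along these lines (the paths and username are placeholders to adapt):&lt;/p&gt;

```shell
# Rebuild the static mirror nightly at 03:30; adjust paths and USERNAME.
30 3 * * * cd /path/to/dev_to_static; ./dev_to_static.pl -u USERNAME -t /var/www/blog
```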

</description>
      <category>devto</category>
      <category>perl</category>
      <category>programming</category>
    </item>
    <item>
      <title>M2 vs M4 Mac Mini: Is it Worth the Upgrade?</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Tue, 11 Mar 2025 13:58:04 +0000</pubDate>
      <link>https://forem.com/dkechag/m2-to-m4-mac-mini-is-it-worth-the-upgrade-3ojd</link>
      <guid>https://forem.com/dkechag/m2-to-m4-mac-mini-is-it-worth-the-upgrade-3ojd</guid>
      <description>&lt;p&gt;Almost two years ago, I upgraded from an &lt;a href="https://blogs.perl.org/users/dimitrios_kechagias/2021/04/perl-performance-on-apple-m1.html" rel="noopener noreferrer"&gt;M1 Mac Mini&lt;/a&gt; to the M2 Mac Mini and &lt;a href="https://dev.to/dkechag/m1-pro-vs-m2-can-the-15-macbook-air-replace-a-macbook-pro-11ei"&gt;compared it&lt;/a&gt; to my M1 Pro MacBook. The M2 was a modest 10-20% improvement over the M1, but it was far from the M1 Pro's multi-core performance - for demanding workloads, it wasn't a significant upgrade.&lt;/p&gt;

&lt;p&gt;Now, Apple has skipped the M3 entirely and jumped straight to the M4 Mac Mini. On paper, it's already a more attractive package: the base model finally starts with 16GB RAM instead of 8GB (with no price increase), addressing a major criticism, and we get a CPU with a faster clock, two extra cores and faster RAM:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;CPU&lt;/th&gt;
&lt;th&gt;M2&lt;/th&gt;
&lt;th&gt;M4&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Transistors&lt;/td&gt;
&lt;td&gt;20B&lt;/td&gt;
&lt;td&gt;28B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max CPU Clock&lt;/td&gt;
&lt;td&gt;3.49 GHz&lt;/td&gt;
&lt;td&gt;4.4 GHz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU Cores&lt;/td&gt;
&lt;td&gt;4+4&lt;/td&gt;
&lt;td&gt;4+6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPU Cores&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM Bandwidth&lt;/td&gt;
&lt;td&gt;100 GB/s&lt;/td&gt;
&lt;td&gt;120 GB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The CPU spec differences may not seem spectacular, but let's have a look at how they translate into real-world performance gains and whether they might make the M4 a worthy upgrade from an M1 or M2.&lt;/p&gt;

&lt;h2&gt;
  
  
  CPU Comparison
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Synthetic CPU Benchmarks
&lt;/h3&gt;

&lt;p&gt;I will start with a couple of generic CPU benchmarks: my own &lt;a href="https://metacpan.org/dist/Benchmark-DKbench" rel="noopener noreferrer"&gt;DKbench&lt;/a&gt; and the popular Geekbench 5 (note that I don't use version 6 because, IMHO, it's broken for multi-core):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq9w3lubnq54m7ps9bte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq9w3lubnq54m7ps9bte.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The M4 dominates by more than the CPU spec comparison table would indicate; it looks like the M4 core has improved IPC over the M2. The single-core advantage, at 30-35%, is quite a bit more than the clock speed increase, and the multi-core difference touched 50% in DKbench. However, under full load on the multi-core DKbench, the M4's fan gets noticeably (although not annoyingly) loud - but we'll get into that again later. For now, on to 3D rendering:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9p2tflpctziajkarr04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9p2tflpctziajkarr04.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the M4 is over 50% faster in single-core and over 60% faster in multi-core, an amazing showing.&lt;br&gt;
The M4 also supports the GPU Cinebench benchmark, where it scores 3971; the M2 is not supported by this Cinebench release, so there is no direct comparison.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-World Developer Workloads
&lt;/h3&gt;

&lt;p&gt;On to real-world tasks that concern developers. I timed decompressing the Xcode 16.2 XIP file, running static analysis on the Xcode project of &lt;a href="https://apps.apple.com/us/app/polar-scope-align-pro/id970161373" rel="noopener noreferrer"&gt;my astronomy app&lt;/a&gt; and compiling it (four times, to get a comparable sum):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey8tdr8vau07zay1373n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey8tdr8vau07zay1373n.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most useful task to us devs - compilation, which we do multiple times per day - is also the most impressive one, with an over 50% performance gain. It should be clear by now: the performance difference is real and would be noticeable in daily work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Video Compression
&lt;/h3&gt;

&lt;p&gt;As the Mac Mini is ideal for a media center, video transcoding would be a common task. I am trying an FFmpeg H264 transcode (same method as &lt;a href="https://dev.to/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9#test-setup"&gt;here&lt;/a&gt;), plus a Handbrake SuperHQ HEVC transcode of the same video:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtmdihhxvkynfwe8lee8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtmdihhxvkynfwe8lee8.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The single-core improvement is not bad at 23%, but again the multi-core performance at around 50% is very impressive.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU Result Table
&lt;/h3&gt;

&lt;p&gt;Here are all the above results collected in table form (benchmark scores are higher-is-better; timed tasks are in seconds, so lower is better):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Single-Core Benchmarks&lt;/th&gt;
&lt;th&gt;M2&lt;/th&gt;
&lt;th&gt;M4&lt;/th&gt;
&lt;th&gt;M4 Advantage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DKbench 2.9&lt;/td&gt;
&lt;td&gt;1430&lt;/td&gt;
&lt;td&gt;1847&lt;/td&gt;
&lt;td&gt;29.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geekbench 5&lt;/td&gt;
&lt;td&gt;1940&lt;/td&gt;
&lt;td&gt;2623&lt;/td&gt;
&lt;td&gt;35.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cinebench 2024&lt;/td&gt;
&lt;td&gt;113&lt;/td&gt;
&lt;td&gt;174&lt;/td&gt;
&lt;td&gt;54.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FFmpeg x264&lt;/td&gt;
&lt;td&gt;306.8&lt;/td&gt;
&lt;td&gt;248.4&lt;/td&gt;
&lt;td&gt;23.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;35.5%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Multi-Core Benchmarks&lt;/th&gt;
&lt;th&gt;M2&lt;/th&gt;
&lt;th&gt;M4&lt;/th&gt;
&lt;th&gt;M4 Advantage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DKbench 2.9&lt;/td&gt;
&lt;td&gt;7437&lt;/td&gt;
&lt;td&gt;11084&lt;/td&gt;
&lt;td&gt;49.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Geekbench 5&lt;/td&gt;
&lt;td&gt;9006&lt;/td&gt;
&lt;td&gt;12989&lt;/td&gt;
&lt;td&gt;44.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cinebench 2024&lt;/td&gt;
&lt;td&gt;587&lt;/td&gt;
&lt;td&gt;963&lt;/td&gt;
&lt;td&gt;64.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FFmpeg x264&lt;/td&gt;
&lt;td&gt;78.6&lt;/td&gt;
&lt;td&gt;51.2&lt;/td&gt;
&lt;td&gt;53.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decompress XIP&lt;/td&gt;
&lt;td&gt;76.2&lt;/td&gt;
&lt;td&gt;60.5&lt;/td&gt;
&lt;td&gt;26.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xcode Compile&lt;/td&gt;
&lt;td&gt;51.9&lt;/td&gt;
&lt;td&gt;34.3&lt;/td&gt;
&lt;td&gt;51.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Xcode Analyze&lt;/td&gt;
&lt;td&gt;50.3&lt;/td&gt;
&lt;td&gt;35.4&lt;/td&gt;
&lt;td&gt;42.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Handbrake HEVC&lt;/td&gt;
&lt;td&gt;338&lt;/td&gt;
&lt;td&gt;234&lt;/td&gt;
&lt;td&gt;44.4%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;46.8%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
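For the timed rows (FFmpeg, XIP decompression, the Xcode tasks, Handbrake) the advantage column is the M2 time divided by the M4 time, minus one. Taking the multi-core FFmpeg x264 row as an example:

```shell
# Advantage on a timed task (lower is better): M2 time / M4 time - 1.
# Multi-core FFmpeg x264: 78.6s on the M2 vs 51.2s on the M4.
awk 'BEGIN { printf "%.1f%%\n", (78.6 / 51.2 - 1) * 100 }'
# prints 53.5%
```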

&lt;h2&gt;
  
  
  GPU Comparison
&lt;/h2&gt;

&lt;p&gt;I wouldn't say Macs are gaming machines, as the majority of games don't support macOS and most models have no support for dedicated graphics cards. However, the integrated GPU of the Apple Silicon processors is quite powerful, so I still wanted to see how much improvement there was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5m2ovo1mii93026zuic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5m2ovo1mii93026zuic.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Unigine Heaven benchmark is quite old, but it does show a fair increase in performance (over 20%) - I had a similar result in Civilization VI, which I did not include above. The gains in the Compute benchmark are a bit higher. In any case, you would not really want to upgrade to the M4 just for the graphics, but it's still good to see an improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSD Speed
&lt;/h2&gt;

&lt;p&gt;If you recall, going from the M1 to the M2 on the base (256GB) SSD configuration, Apple reduced the number of NAND chips from two to one, effectively halving the read speed - a decision that caused quite a stir. Let's see how the M4 base config does:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqeuaip866pl4mkmch2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyqeuaip866pl4mkmch2k.png" alt="Image description" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's good news: we are back to good levels of performance, finally matching the M1 or better. Yes, essentially matching the original M1 Mac Mini from 5 years ago counts as good news - I know how strange that sounds! They could at least have bumped up the minimum SSD configuration, but I'll take the RAM increase any day! &lt;/p&gt;

&lt;h2&gt;
  
  
  Form factor, power &amp;amp; cooling
&lt;/h2&gt;

&lt;p&gt;The old Mac Mini was small; the new one is tiny. A couple of USB-A ports are sacrificed, but we get 5 USB-C/Thunderbolt ports in total, up from 2. That might be more future-proof, but it does mean you can't get away without hubs/adapters for those USB-A devices you still have. At least they kept the Ethernet (with a 10Gb option too), 3.5mm audio and HDMI (bumped up to v2.1 from 2.0) ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpddkz4ul5vr4tkn8ejo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpddkz4ul5vr4tkn8ejo.jpg" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The much smaller size is matched with a &lt;a href="https://support.apple.com/en-gb/103253" rel="noopener noreferrer"&gt;hotter, more power-hungry&lt;/a&gt; CPU:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;M2&lt;/th&gt;
&lt;th&gt;M4&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Idle Power (W)&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Power (W)&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;65&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Thermal Out (BTU/h)&lt;/td&gt;
&lt;td&gt;171&lt;/td&gt;
&lt;td&gt;222&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This results in the need for powerful fans to dissipate the heat. As noted above, when running DKbench on 10 cores, I could see the core temperatures quickly reach 100°C, at which point the fan would engage at around 3000 rpm, quickly bringing them down to 70-80°C. This took me by surprise, as it is quite noticeable - I had never heard any Mac Mini (having owned both the M1 and M2) make any perceivable sound before. At least it is an air/suction sound, not an annoying whirring. It is not a huge deal - if I ran DKbench at 6 threads it would not engage the fans and still be over 10% faster than the M2 - but I have no idea why they chose to go this way. I am pretty sure no sane Mac Mini user would say "oh, it's too big - make it smaller, even if it's louder".&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's clear that the M4 Mac Mini is a substantial upgrade over the M2. The newer CPU alone offers significant performance improvements across the board, especially for heavy tasks - a developer or content creator will actually notice the difference in daily tasks like compiling and rendering. Moreover, if we are looking specifically at the low-cost base config models, the M4's doubled RAM and faster SSD are a further advantage.&lt;/p&gt;

&lt;p&gt;However, I can't help but notice a worrying trend. When the M1 launched, it was a game-changer, offering twice the performance of x86 chips at a fraction of the power consumption, with near-silent cooling. Since then, Apple has kept pushing performance higher, but at the cost of increasing power draw. The Mac Mini's power ceiling has jumped from 39W (M1) to 50W (M2) to 65W (M4). At the same time, the chassis has gotten smaller, making cooling more difficult.&lt;/p&gt;

&lt;p&gt;This reminds me of the previous decade, when Intel was releasing power-hungry CPUs to boost performance, while Apple made laptops thinner and thinner. The result? 2019 MacBook Pros that throttled under load and had fans constantly running, especially in warm environments. We're obviously not there yet, but hearing the M4 Mini's fan spin up under heavy load makes me wonder if Apple is starting down a similar path.&lt;/p&gt;

&lt;p&gt;For now, though, the M4 remains an excellent upgrade — but I hope future Apple Silicon devices continue to prioritize the efficiency and silent operation that made the M1 such a game-changer.&lt;/p&gt;

</description>
      <category>mac</category>
      <category>apple</category>
      <category>performance</category>
      <category>hardware</category>
    </item>
    <item>
      <title>Google Axion: A New Leader in ARM Server Performance</title>
      <dc:creator>Dimitrios Kechagias</dc:creator>
      <pubDate>Sun, 08 Dec 2024 16:06:16 +0000</pubDate>
      <link>https://forem.com/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9</link>
      <guid>https://forem.com/dkechag/google-axion-a-new-leader-in-arm-server-performance-4im9</guid>
      <description>&lt;p&gt;Although our current cloud deployment at &lt;a href="//www.spareroom.co.uk"&gt;SpareRoom&lt;/a&gt; is x86, I’ve had the opportunity to test Google’s first in-house ARM server CPU "&lt;strong&gt;Axion&lt;/strong&gt;" for a few months before its recent public release. Without giving too much away upfront - let's just say I was not left unimpressed. I’ll let the numbers, presented as charts, show how &lt;strong&gt;Axion&lt;/strong&gt; compares in both performance and value with the best offerings in Google and Amazon clouds.&lt;/p&gt;

&lt;p&gt;Table of Contents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Contenders&lt;/li&gt;
&lt;li&gt;Test setup&lt;/li&gt;
&lt;li&gt;
Performance Results

&lt;ul&gt;
&lt;li&gt;Benchmark suite results&lt;/li&gt;
&lt;li&gt;Compilation&lt;/li&gt;
&lt;li&gt;7zip performance&lt;/li&gt;
&lt;li&gt;Video compression&lt;/li&gt;
&lt;li&gt;OpenSSL (AVX-512)&lt;/li&gt;
&lt;li&gt;Summary: performance delta vs Genoa &amp;amp; Graviton4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Performance / Price

&lt;ul&gt;
&lt;li&gt;On Demand &amp;amp; Reserved&lt;/li&gt;
&lt;li&gt;Spot Instances&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Contenders
&lt;/h2&gt;

&lt;p&gt;Here are the contenders for this comparison, the best/most relevant drawn from &lt;a href="https://dev.to/dkechag/cloud-provider-comparison-2024-vm-performance-price-3h4l"&gt;my recent Cloud VM Comparison test&lt;/a&gt;, with prices updated for the 3rd week of November 2024:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance Type&lt;/th&gt;
&lt;th&gt;CPU type&lt;/th&gt;
&lt;th&gt;HT / SMT&lt;/th&gt;
&lt;th&gt;Price* $/Month&lt;/th&gt;
&lt;th&gt;1Y Res.* $/Month&lt;/th&gt;
&lt;th&gt;3Y Res.* $/Month&lt;/th&gt;
&lt;th&gt;Spot* $/Month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Amazon C7a&lt;/td&gt;
&lt;td&gt;AMD EPYC Genoa&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;77.36&lt;/td&gt;
&lt;td&gt;51.97&lt;/td&gt;
&lt;td&gt;35.39&lt;/td&gt;
&lt;td&gt;26.20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google c4a&lt;/td&gt;
&lt;td&gt;Google Axion&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;57.69&lt;/td&gt;
&lt;td&gt;38.89&lt;/td&gt;
&lt;td&gt;27.29&lt;/td&gt;
&lt;td&gt;24.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amazon C8g&lt;/td&gt;
&lt;td&gt;AWS Graviton4&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;66.45&lt;/td&gt;
&lt;td&gt;44.60&lt;/td&gt;
&lt;td&gt;28.75&lt;/td&gt;
&lt;td&gt;10.80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google c4&lt;/td&gt;
&lt;td&gt;Intel Emerald Rapids&lt;/td&gt;
&lt;td&gt;Y&lt;/td&gt;
&lt;td&gt;64.49&lt;/td&gt;
&lt;td&gt;41.52&lt;/td&gt;
&lt;td&gt;30.34&lt;/td&gt;
&lt;td&gt;27.23&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google t2d&lt;/td&gt;
&lt;td&gt;AMD EPYC Milan&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;64.68&lt;/td&gt;
&lt;td&gt;41.86&lt;/td&gt;
&lt;td&gt;30.76&lt;/td&gt;
&lt;td&gt;11.77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google c3d&lt;/td&gt;
&lt;td&gt;AMD EPYC Genoa&lt;/td&gt;
&lt;td&gt;Y&lt;/td&gt;
&lt;td&gt;57.12&lt;/td&gt;
&lt;td&gt;36.87&lt;/td&gt;
&lt;td&gt;27.02&lt;/td&gt;
&lt;td&gt;24.28&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;sup&gt;* &lt;em&gt;Monthly price for 2x vCPU / 4GB RAM / 30GB disk instance, except t2d with 8GB RAM. For c3d, 4x vCPU is the minimum so price extrapolated to 2x vCPU.&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Amazon had the fastest ARM server VMs as we saw, featuring the &lt;strong&gt;Graviton4&lt;/strong&gt;, and I am including the new compute-optimized type &lt;strong&gt;C8g&lt;/strong&gt;. On the x86 front, I selected their &lt;strong&gt;C7a&lt;/strong&gt;, featuring non-SMT AMD &lt;strong&gt;Genoa&lt;/strong&gt;, which remains the fastest x86 VM in terms of per-vCPU performance.&lt;br&gt;
From Google, I compared against three x86 types: the SMT/HT-enabled AMD &lt;strong&gt;Genoa&lt;/strong&gt; (&lt;strong&gt;c3d&lt;/strong&gt;) and Intel &lt;strong&gt;Emerald Rapids&lt;/strong&gt; (&lt;strong&gt;c4&lt;/strong&gt;), and the older AMD &lt;strong&gt;Milan&lt;/strong&gt; (&lt;strong&gt;t2d&lt;/strong&gt;). The &lt;strong&gt;t2d&lt;/strong&gt;, while older, remains competitive in per-vCPU metrics due to its lack of SMT.&lt;/p&gt;
&lt;h2&gt;
  
  
  Test setup
&lt;/h2&gt;

&lt;p&gt;I used the same methodology as my recent cloud comparison test, as it was detailed &lt;a href="https://dev.to/dkechag/cloud-provider-comparison-2024-vm-performance-price-3h4l#test-setup-amp-benchmarking-methodology"&gt;here&lt;/a&gt; with one addition: a real-world FFmpeg video compression benchmark.&lt;/p&gt;

&lt;p&gt;Here’s how I set it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For ARM instances - replace 'arm64' with 'amd64' for x86:

wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-arm64-static.tar.xz
tar -xJf ffmpeg-release-arm64-static.tar.xz --wildcards --no-anchored 'ffmpeg' -O &amp;gt; /usr/bin/ffmpeg
chmod +x /usr/bin/ffmpeg
wget https://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_720p_h264.mov
time ffmpeg -i big_buck_bunny_720p_h264.mov -c:v libx264 -threads 1 out264a.mp4
time ffmpeg -i big_buck_bunny_720p_h264.mov -c:v libx264 -threads 2 out264b.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Performance Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Benchmark suite results
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://metacpan.org/pod/Benchmark::DKbench" rel="noopener noreferrer"&gt;DKbench&lt;/a&gt; is probably the most telling benchmark for the general workloads we use our servers with, and Geekbench 5 is added because of the wide availability of comparison results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjgty7fl93nbrcbdp88c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjgty7fl93nbrcbdp88c.png" alt="Image description" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Immediately we see that &lt;strong&gt;Axion&lt;/strong&gt; not only edges out &lt;strong&gt;Graviton4&lt;/strong&gt;, it’s surprisingly close to the &lt;strong&gt;C7a&lt;/strong&gt;, the fastest x86 instance overall.&lt;/p&gt;

&lt;p&gt;DKbench is a good indicator of general performance, but we'll get on with some more specialised testing.&lt;/p&gt;
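If you want to run DKbench on your own instances, the suite is on CPAN; a minimal session (assuming a working Perl with cpanminus - consult the module's documentation for options such as thread count) looks like this:

```shell
# Install the benchmark suite from CPAN (assumes cpanminus is available).
cpanm Benchmark::DKbench

# Run the suite with default settings; see the documentation for options
# such as selecting specific benchmarks or thread counts.
dkbench
```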

&lt;h3&gt;
  
  
  Compilation
&lt;/h3&gt;

&lt;p&gt;For a developer-specific workload, I compiled Perl on two threads:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkk9zgjxcm7s5vvv43tz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkk9zgjxcm7s5vvv43tz.png" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Axion&lt;/strong&gt; came second only to Amazon's &lt;strong&gt;Genoa&lt;/strong&gt;, and the difference was marginal.&lt;/p&gt;
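For reference, the compilation timing amounts to something like the following sketch (the Perl version here is illustrative and may differ from the one I used):

```shell
# Fetch and unpack a Perl source tarball (version is illustrative).
wget https://www.cpan.org/src/5.0/perl-5.40.0.tar.gz
tar -xzf perl-5.40.0.tar.gz
cd perl-5.40.0

# Configure with defaults, then time a build on 2 threads to match
# the 2-vCPU instances.
./Configure -des
time make -j2
```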

&lt;h3&gt;
  
  
  7zip performance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frti07bd61ouplx8enqol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frti07bd61ouplx8enqol.png" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the most impressive showing for &lt;strong&gt;Axion&lt;/strong&gt;, leading in both compression and decompression.&lt;/p&gt;
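These numbers come from 7-Zip's built-in benchmark; assuming the p7zip package is installed, a run pinned to two threads would be:

```shell
# 7-Zip built-in LZMA benchmark on 2 threads (-mmt2), matching the
# 2-vCPU instances; it reports compression and decompression ratings.
7z b -mmt2
```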

&lt;h3&gt;
  
  
  Video compression
&lt;/h3&gt;

&lt;p&gt;As mentioned above, I am transcoding using FFmpeg/libx264. libx264 is a very mature library that should be well-optimized for Intel/AMD, so I was interested to see how well the new Google CPU could do:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ktqnzoy0v5sgj073jkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ktqnzoy0v5sgj073jkk.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the answer is: not bad at all. It falls behind &lt;strong&gt;Emerald Rapids&lt;/strong&gt; &amp;amp; &lt;strong&gt;Genoa&lt;/strong&gt; on a single thread, but not by a significant margin. And given that for video compression we don't really care about single-threaded runs, only the &lt;strong&gt;C7a&lt;/strong&gt; is actually faster per vCPU. &lt;/p&gt;

&lt;h3&gt;
  
  
  OpenSSL (AVX-512)
&lt;/h3&gt;

&lt;p&gt;Moving on to a benchmark even more heavily optimized for Intel/AMD CPUs: OpenSSL can use the latest AVX-512 instructions for increased performance. This is basically the worst-case scenario for ARM, as the architecture has much more limited SIMD extensions (NEON):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfrve8s37xnol3io3nq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfrve8s37xnol3io3nq9.png" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, &lt;strong&gt;Axion&lt;/strong&gt; improves over &lt;strong&gt;Graviton4&lt;/strong&gt; as in all tests, but cannot keep up even with the older x86.&lt;/p&gt;
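To get a feel for this on your own VMs, OpenSSL's built-in speed test is the easiest route; a sketch of the kind of invocation (the exact ciphers and digests I aggregated may differ):

```shell
# Single-threaded AES-256-GCM throughput via the EVP interface.
openssl speed -evp aes-256-gcm

# The same test forked across 2 processes to load both vCPUs.
openssl speed -multi 2 -evp aes-256-gcm
```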

&lt;h3&gt;
  
  
  Summary: performance delta vs Genoa &amp;amp; Graviton4
&lt;/h3&gt;

&lt;p&gt;Let's have a look at the performance delta of the &lt;strong&gt;Axion&lt;/strong&gt; vs &lt;strong&gt;Genoa&lt;/strong&gt; (in purple) and &lt;strong&gt;Graviton4&lt;/strong&gt; (in yellow) for all the benchmarks we ran (skipping the special case of OpenSSL):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblpa0o6jfg45b20eiq2a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblpa0o6jfg45b20eiq2a.jpg" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are consistent gains over the &lt;strong&gt;Graviton4&lt;/strong&gt; (from 3 to 15%). On the other hand, &lt;strong&gt;Genoa&lt;/strong&gt; maintains the lead for most tests, with the maximum difference at 15%, but &lt;strong&gt;Axion&lt;/strong&gt; keeps much closer than that in general and even bests &lt;strong&gt;Genoa&lt;/strong&gt; in some cases. I would say that, for most uses, &lt;strong&gt;Axion&lt;/strong&gt; will be closer to &lt;strong&gt;Genoa&lt;/strong&gt; than it is to &lt;strong&gt;Graviton4&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance / Price
&lt;/h2&gt;

&lt;p&gt;The main reason Amazon &amp;amp; Google developed their own ARM solutions is to provide the best possible value (for themselves and their customers). Hence, a look at performance / price is possibly even more useful than raw performance. I will be looking at multi-core performance with DKbench, as its varied benchmark mix gave reasonably balanced results.&lt;/p&gt;

&lt;h3&gt;
  
  
  On Demand &amp;amp; Reserved
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw3ybgl94o28kesh0a0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw3ybgl94o28kesh0a0o.png" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looks like it's mission accomplished for Google. &lt;strong&gt;Axion&lt;/strong&gt; is by far the best value among the tested VMs, both for On Demand and 1y/3y reserved pricing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spot Instances
&lt;/h3&gt;

&lt;p&gt;Spot prices vary wildly, both with time and location. Based on Eastern-US pricing on the 3rd week of November when I was compiling the results, this is what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fush0wojfv4r8t86bbzn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fush0wojfv4r8t86bbzn9.png" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I don't know if Amazon is doing this on purpose, but they are offering &lt;strong&gt;Graviton4&lt;/strong&gt; at an unbeatable spot price in US-east, where &lt;strong&gt;Axion&lt;/strong&gt; has availability, while pricing it almost 2x higher in US-west, where &lt;strong&gt;Axion&lt;/strong&gt; is not yet available! In any case, for good value on &lt;strong&gt;Axion&lt;/strong&gt; spot instances you'll have to wait for wider availability; right now on Google you'd have to go with the &lt;strong&gt;Milan&lt;/strong&gt; Tau instances if you wanted the best value.&lt;/p&gt;

&lt;p&gt;This chart is mostly to make you research spot prices, as there are always great deals to be found, especially if you are not limited to a specific region. The deals change often, so try to keep track of pricing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Google’s &lt;strong&gt;Axion&lt;/strong&gt; CPU proves to be an exceptional contender in the ARM server space, offering stellar performance and value. Expect an almost 10% performance improvement over &lt;strong&gt;Graviton4&lt;/strong&gt;. And while it trails the x86 CPUs in some specialized workloads (e.g. AVX-512), it is not far behind the best of them in the majority of tasks, making it a viable alternative for those seeking to switch to ARM while keeping top-tier performance.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>arm</category>
      <category>aws</category>
      <category>gcp</category>
    </item>
  </channel>
</rss>
