<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dzhuneyt</title>
    <description>The latest articles on Forem by Dzhuneyt (@dzhuneyt).</description>
    <link>https://forem.com/dzhuneyt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F165112%2F19cac2dd-5692-4739-bbc9-159e0be251ec.jpg</url>
      <title>Forem: Dzhuneyt</title>
      <link>https://forem.com/dzhuneyt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dzhuneyt"/>
    <language>en</language>
    <item>
      <title>5 things to know about Claude Opus 4.7</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Thu, 16 Apr 2026 17:36:36 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/5-things-to-know-about-claude-opus-47-50p6</link>
      <guid>https://forem.com/dzhuneyt/5-things-to-know-about-claude-opus-47-50p6</guid>
      <description>&lt;p&gt;Five months after Opus 4.6, Anthropic shipped Opus 4.7 today. Point releases are becoming the norm. Here's what actually changed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Opus 4.7 is the flagship. Mythos is the frontier.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Opus 4.7 is Anthropic's new flagship for general availability. It shipped on day one to Claude.ai, the API, Claude Code, Amazon Bedrock, Google Vertex AI, Microsoft Foundry, and GitHub Copilot — at the same $5/$25 per million input/output tokens as 4.6 (though a new tokenizer can inflate effective cost up to 35%). Above it sits Mythos, a frontier model Anthropic announced earlier this month and gated to roughly 40 vetted organizations through Project Glasswing. Mythos is not for sale. Opus 4.7 is the model you can actually use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Coding is the headline.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Almost all the published gains concentrate in software engineering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SWE-bench Pro: 53.4 → 64.2 (ahead of GPT-5.4 at 57.7 and Gemini 3.1 Pro at 54.2)&lt;/li&gt;
&lt;li&gt;SWE-bench Verified: 80.8 → 87.6&lt;/li&gt;
&lt;li&gt;Cursor internal benchmark: 58% → 70%&lt;/li&gt;
&lt;li&gt;Rakuten-SWE-Bench: 3× more production tasks resolved vs 4.6&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enterprise numbers from Anthropic's launch partners: Databricks reports 21% fewer errors on document reasoning. Replit says "same quality at lower cost." Box reports 56% fewer model calls and 50% fewer tool calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The behavior changed, not just the score.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the part worth noting if you run prompts or agents today. Opus 4.7 reasons more, reaches for tools less, follows instructions more literally, and checks its own work before reporting back. It also writes in a more direct tone and adapts response length to the task.&lt;/p&gt;

&lt;p&gt;If you've hard-coded scaffolding into prompts ("summarize every N steps," verbosity constraints, "use this tool first"), some of it is now redundant or counter-productive. Anthropic recommends a new &lt;code&gt;xhigh&lt;/code&gt; effort level as the default for coding and agentic work.&lt;/p&gt;
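&lt;p&gt;For illustration, here is roughly what opting into the higher effort level might look like over the Messages API. The endpoint and headers are the standard Anthropic API ones; the model ID and the exact name and placement of the effort field are assumptions on my part - check Anthropic's API reference for the real request shape.&lt;/p&gt;

```shell
# Sketch only: "claude-opus-4-7" and the "effort" field are assumptions;
# verify both against the current Anthropic API reference before using.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-opus-4-7",
    "max_tokens": 2048,
    "effort": "xhigh",
    "messages": [{"role": "user", "content": "Fix the failing test in this repo."}]
  }'
```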

&lt;p&gt;&lt;strong&gt;4. Vision got a serious upgrade.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Opus 4.7 can process images up to ~3.75 megapixels — roughly 3× the resolution of prior Opus models. For computer-use agents, there's also a practical detail: pointing and bounding-box coordinates are now 1:1 with actual pixels, so there's no scale-factor math before clicking or highlighting something. Small on paper, meaningful for anyone building UI automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Safety got a new layer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Opus 4.7 is the first production model shipping with Project Glasswing — runtime safeguards that detect and block prohibited cybersecurity requests like exploitation or credential hunting. Anthropic also deliberately trained its cyber capabilities down, keeping them below Mythos. Legitimate security researchers can apply for a verification program to get reduced restrictions.&lt;/p&gt;

&lt;p&gt;The framing from Anthropic: 4.7 is the testbed for safeguards that eventually let them broaden access to Mythos-class models.&lt;/p&gt;

&lt;p&gt;Have you tried Opus 4.7 already? How does it compare to other models, in your opinion?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>chatgpt</category>
      <category>openai</category>
    </item>
    <item>
      <title>AWS S3 Files: The Missing Conversation</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Fri, 10 Apr 2026 00:41:28 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/aws-s3-files-the-missing-conversation-3jn1</link>
      <guid>https://forem.com/dzhuneyt/aws-s3-files-the-missing-conversation-3jn1</guid>
      <description>&lt;p&gt;Amazon S3 Files launched on April 7, and the reaction was immediate — nearly universal excitement across the AWS engineering community. And honestly, the excitement is warranted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/andywarfield/" rel="noopener noreferrer"&gt;Andrew Warfield&lt;/a&gt;, VP and Distinguished Engineer at Amazon, framed the vision clearly: &lt;em&gt;"With Tables, Vectors, and now Files, we are consciously changing the surface of S3. It's not just objects — it's evolving to make sure you can work with your data however you need to."&lt;/em&gt; That's not a minor product update. That's a platform shift.&lt;/p&gt;

&lt;p&gt;But as I scrolled through dozens of posts and articles, something stood out: almost nobody had actually tested it. The conversation was full of explainers and excitement, light on hands-on findings. So I spun up a throwaway AWS account, provisioned the infrastructure, and ran it through real file operations. Here's what the conversation is missing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What S3 Files Actually Is
&lt;/h2&gt;

&lt;p&gt;The short version: S3 Files adds an NFS 4.1/4.2 file system interface on top of your S3 buckets. You mount a bucket, and your tools — &lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;ls&lt;/code&gt;, Python's &lt;code&gt;open()&lt;/code&gt;, anything that speaks POSIX — can read and write S3 data directly. Under the hood, it's built on Amazon EFS technology. Your data stays in S3. Actively used files get cached in a high-performance storage layer for low-latency access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/donkersgoed/" rel="noopener noreferrer"&gt;Luc van Donkersgoed&lt;/a&gt;, Principal Engineer at PostNL, raised the right question: &lt;em&gt;"Does this finally settle the question 'is it a file or is it an object'?"&lt;/em&gt; The answer, as far as I can tell: it's both now, simultaneously, and that's the whole point.&lt;/p&gt;

&lt;p&gt;S3 Files is the third leg of a deliberate strategy. S3 Tables gave you structured query access. S3 Vectors gave you embedding storage. S3 Files gives you POSIX. The pattern is clear — S3 is becoming a universal data substrate, not just an object store.&lt;/p&gt;

&lt;p&gt;Understanding what it is on paper is straightforward. The interesting part is what happens when you actually use it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Found When I Tested It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The EFS DNA Is More Than Cosmetic
&lt;/h3&gt;

&lt;p&gt;The first thing you'll notice is that S3 Files doesn't just borrow from EFS conceptually — it's deeply woven into the EFS ecosystem.&lt;/p&gt;

&lt;p&gt;The mount helper binary (&lt;code&gt;mount.s3files&lt;/code&gt;) ships inside the &lt;code&gt;amazon-efs-utils&lt;/code&gt; package. There's no separate &lt;code&gt;amazon-s3-files-utils&lt;/code&gt;. The IAM trust principal you need in your role policy is &lt;code&gt;elasticfilesystem.amazonaws.com&lt;/code&gt; — nothing with "s3files" in the name. When you inspect the mount, it shows up as &lt;code&gt;127.0.0.1:/ type nfs4&lt;/code&gt; — the same local NFS proxy pattern that EFS uses.&lt;/p&gt;

&lt;p&gt;This has practical implications. If you've operated EFS before, your mental model transfers directly: mount targets, security groups on port 2049, access points, the client permission model (&lt;code&gt;elasticfilesystem:ClientMount&lt;/code&gt;, &lt;code&gt;ClientWrite&lt;/code&gt;, &lt;code&gt;ClientRootAccess&lt;/code&gt;). If you haven't, expect a learning curve that the launch announcement doesn't prepare you for.&lt;/p&gt;
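&lt;p&gt;To make that concrete, here is roughly what the mount flow looked like in my testing. Treat it as a sketch: the package and helper names are as described above, but the exact mount syntax and the file-system identifier format may differ in your environment.&lt;/p&gt;

```shell
# Install the EFS utilities package, which ships the mount.s3files helper
sudo yum install -y amazon-efs-utils

# Create a mount point and mount the file system
# (fs-EXAMPLE0123 is a placeholder identifier)
sudo mkdir -p /mnt/s3files
sudo mount -t s3files fs-EXAMPLE0123 /mnt/s3files

# Inspecting the mount reveals the local NFS proxy pattern EFS uses
mount | grep /mnt/s3files
```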

&lt;h3&gt;
  
  
  The Performance Story Has Two Chapters
&lt;/h3&gt;

&lt;p&gt;The headline numbers in the announcement — "multiple terabytes per second of aggregate read throughput" — are real, but they describe one specific scenario. The full picture is more nuanced.&lt;/p&gt;

&lt;p&gt;My setup was minimal — a single &lt;code&gt;t4g.micro&lt;/code&gt; ARM instance in &lt;code&gt;us-east-1&lt;/code&gt;, mounting the file system over NFS in a single availability zone. Not a performance-optimized configuration by any means, and throughput on larger instances or multi-AZ setups would differ. But it gives you a baseline.&lt;/p&gt;

&lt;p&gt;The first (cold) read of a 5MB file completed at &lt;strong&gt;17-18 MB/s&lt;/strong&gt; — respectable, but not the headline number. The second read of the same file: &lt;strong&gt;up to 3.1 GB/s&lt;/strong&gt; across multiple runs — over 100x faster. The intelligent caching layer had kicked in.&lt;/p&gt;

&lt;p&gt;Small file reads were consistently in the &lt;strong&gt;2-6 millisecond&lt;/strong&gt; range. Writing 100 small files took under a second.&lt;/p&gt;

&lt;p&gt;The key mechanic here is the 128KB threshold (configurable). Files at or below this size get loaded into the high-performance storage layer with sub-millisecond to single-digit millisecond latencies. Files above it are streamed directly from S3 — fast in aggregate, but not cached.&lt;/p&gt;

&lt;p&gt;This means performance depends entirely on your access pattern and whether the cache is warm. Repeated reads of working-set data will be blazing fast. One-off reads of large cold files will feel like S3 with extra steps. Neither is wrong — but the headline doesn't distinguish between them.&lt;/p&gt;
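&lt;p&gt;You can reproduce the cold-versus-warm comparison with nothing more than &lt;code&gt;dd&lt;/code&gt;. A minimal sketch (the example mount path is from my setup):&lt;/p&gt;

```shell
# Read a file twice and print the throughput of each pass. On an S3 Files
# mount the second pass should be dramatically faster once the cache layer
# is warm; on a local disk the two numbers will be similar.
read_twice() {
  dd if="$1" of=/dev/null bs=1M 2>/tmp/read1.log
  tail -n 1 /tmp/read1.log   # cold(ish) pass
  dd if="$1" of=/dev/null bs=1M 2>/tmp/read2.log
  tail -n 1 /tmp/read2.log   # warm pass
}

# Example usage (path is from my test setup):
# read_twice /mnt/s3files/sample-5mb.bin
```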

&lt;h3&gt;
  
  
  File System Semantics: Mostly Complete, With Gaps
&lt;/h3&gt;

&lt;p&gt;I tested the operations you'd actually run in production:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Write, append&lt;/td&gt;
&lt;td&gt;Works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rename (mv)&lt;/td&gt;
&lt;td&gt;Works — atomic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mkdir -p&lt;/td&gt;
&lt;td&gt;Works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;chmod&lt;/td&gt;
&lt;td&gt;Works — POSIX permissions preserved&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Symlinks&lt;/td&gt;
&lt;td&gt;Works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File locking (flock)&lt;/td&gt;
&lt;td&gt;Works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hard links&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Does not work&lt;/strong&gt; — "Too many links" error&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Read-after-write consistency held in every test I ran. I wrote a file and immediately read it back — no stale data, no lag. This is explicitly documented as a guarantee, and my testing confirmed it.&lt;/p&gt;

&lt;p&gt;A few less obvious details: symlinks do work at the filesystem level, but when they sync to S3, they become regular objects containing the target path — not true symlinks. There's also a hidden &lt;code&gt;.s3files-lost+found-&amp;lt;fs-id&amp;gt;&lt;/code&gt; directory that appears at the mount root.&lt;/p&gt;
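&lt;p&gt;If you want to reproduce the table above on your own mount, the checks boil down to a handful of shell commands. A small probe script (pass your mount point as an argument; with no argument it runs against a throwaway temp directory):&lt;/p&gt;

```shell
# Probe basic file-system semantics in a target directory. On a local
# disk every line should print "ok"; on S3 Files, the hard-link step
# was the one failure in my testing ("Too many links").
probe_semantics() {
  dir="${1:-$(mktemp -d)}"
  ( cd "$dir" || exit 1
    echo hello > a.txt            ; echo "write: ok"
    echo world >> a.txt           ; echo "append: ok"
    mv a.txt b.txt                ; echo "rename: ok"
    mkdir -p nested/dir           ; echo "mkdir -p: ok"
    chmod 600 b.txt               ; echo "chmod: ok"
    ln -s b.txt link              ; echo "symlink: ok"
    if ln b.txt hard 2>/dev/null
    then echo "hardlink: ok"
    else echo "hardlink: FAILED"
    fi
  )
}

# Example: probe_semantics /mnt/s3files
```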

&lt;h3&gt;
  
  
  The Sync Story Is Asymmetric
&lt;/h3&gt;

&lt;p&gt;S3 Files syncs bidirectionally between the filesystem and the S3 bucket. But the two directions behave very differently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filesystem → S3&lt;/strong&gt; is fast. A single file written through the mount appeared as an S3 object within &lt;strong&gt;1-2 seconds&lt;/strong&gt;. A batch of 5 files written in quick succession took about &lt;strong&gt;a minute&lt;/strong&gt; to fully sync — suggesting some batching or queuing under the hood, though a larger sample would be needed to characterize this precisely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 → Filesystem&lt;/strong&gt; is slower. A file uploaded directly to S3 via the API took roughly &lt;strong&gt;45 seconds&lt;/strong&gt; to appear on the mounted filesystem. This is the direction that matters if you have external processes writing to S3 and expect the mounted view to reflect those changes promptly.&lt;/p&gt;

&lt;p&gt;Both directions are automatic and require no configuration. But if your architecture relies on S3 event notifications firing immediately after a filesystem write, or on the mounted view reflecting S3 uploads in near-real-time, you need to account for these delays.&lt;/p&gt;
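&lt;p&gt;In practice that means polling with a timeout rather than assuming the mounted view is current. A minimal helper along these lines:&lt;/p&gt;

```shell
# Block until a path exists or a timeout (in seconds) expires.
# Useful when an external process uploads to S3 and you need the object
# to surface on the mount before proceeding (~45s in my tests).
wait_for_path() {
  path="$1"
  timeout="${2:-60}"
  waited=0
  while [ "$waited" -lt "$timeout" ]; do
    if [ -e "$path" ]; then
      return 0
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 1
}

# Example (path is hypothetical):
# wait_for_path /mnt/s3files/incoming/report.csv 90 || echo "timed out"
```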

&lt;h3&gt;
  
  
  Container Integration Is Seamless
&lt;/h3&gt;

&lt;p&gt;One finding that pleasantly surprised me: Docker containers can access the S3 Files mount via a simple bind mount. No FUSE configuration, no special container capabilities, no sidecar.&lt;/p&gt;

&lt;p&gt;I tested Alpine containers reading S3 data, writing files back, and a Python container running &lt;code&gt;json.load()&lt;/code&gt; directly on a JSON file stored in S3 — all through a straightforward &lt;code&gt;-v /mnt/s3files:/data&lt;/code&gt; bind mount. Writes from inside the container were immediately visible on the host and synced to S3 within seconds.&lt;/p&gt;

&lt;p&gt;This is a meaningful practical advantage over Mountpoint for S3, which requires FUSE support inside the container. With S3 Files, the NFS mount on the host is transparent to the container — it just sees a normal directory.&lt;/p&gt;
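&lt;p&gt;For reference, the container tests amounted to commands like these (the mount path and file names are from my throwaway setup - substitute your own):&lt;/p&gt;

```shell
# List S3 data from inside an Alpine container via a plain bind mount
docker run --rm -v /mnt/s3files:/data alpine ls /data

# Write a file from inside the container; it lands on the host mount
# and syncs to S3 within seconds
docker run --rm -v /mnt/s3files:/data alpine sh -c 'echo from-container > /data/hello.txt'

# Parse a JSON file stored in S3 directly from a Python container
docker run --rm -v /mnt/s3files:/data python:3-alpine \
  python -c "import json; print(json.load(open('/data/config.json')))"
```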




&lt;h2&gt;
  
  
  What's Not Ready Yet
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Versioning is mandatory.&lt;/strong&gt; Your S3 bucket must have versioning enabled before you can create a file system on it. The announcement doesn't mention this. I discovered it when my &lt;code&gt;create-file-system&lt;/code&gt; call failed with a validation error. For large, high-churn buckets, mandatory versioning has real cost and lifecycle implications that deserve consideration.&lt;/p&gt;
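&lt;p&gt;If you're following along, enable versioning before creating the file system (the bucket name below is a placeholder):&lt;/p&gt;

```shell
# Versioning must be on, or create-file-system fails with a validation error
aws s3api put-bucket-versioning \
  --bucket my-s3files-bucket \
  --versioning-configuration Status=Enabled

# Confirm it took effect
aws s3api get-bucket-versioning --bucket my-s3files-bucket
```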

&lt;p&gt;&lt;strong&gt;No Infrastructure as Code support at launch.&lt;/strong&gt; There's no CloudFormation resource type, no CDK construct, no Terraform resource. This will almost certainly come soon, but if you're evaluating today, you're working with the CLI, the SDK, or rolling your own custom resources. Something to keep in mind, not a dealbreaker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The IAM setup is non-obvious.&lt;/strong&gt; The trust policy uses EFS service principals (&lt;code&gt;elasticfilesystem.amazonaws.com&lt;/code&gt;) with S3 Files-specific conditions (&lt;code&gt;arn:aws:s3files:REGION:ACCOUNT:file-system/*&lt;/code&gt;). Get this wrong, and the file system silently enters an "error" state — there's no clear failure at creation time. It reports "creating" and then, minutes later, quietly flips to "error" with an access denied message. This cost me real debugging time, and it's the kind of operational sharp edge worth knowing about upfront.&lt;/p&gt;
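&lt;p&gt;For reference, the trust policy shape that eventually worked for me looked roughly like this. Treat the condition key and ARN format as observations from my testing rather than gospel - verify against the current documentation:&lt;/p&gt;

```shell
# The trust policy: EFS service principal plus an s3files-scoped condition
POLICY='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "elasticfilesystem.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": {
      "ArnLike": { "aws:SourceArn": "arn:aws:s3files:REGION:ACCOUNT:file-system/*" }
    }
  }]
}'
printf '%s\n' "$POLICY" > trust-policy.json

# Attach it when creating the role (role name is a placeholder):
# aws iam create-role --role-name s3files-access \
#   --assume-role-policy-document file://trust-policy.json
```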




&lt;h2&gt;
  
  
  Where This Actually Fits
&lt;/h2&gt;

&lt;p&gt;To ground this, here's how S3 Files compares to the alternatives:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Mountpoint for S3&lt;/th&gt;
&lt;th&gt;S3 Files&lt;/th&gt;
&lt;th&gt;EFS&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Write support&lt;/td&gt;
&lt;td&gt;Append-only&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Caching&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Intelligent (128KB threshold)&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Protocol&lt;/td&gt;
&lt;td&gt;FUSE&lt;/td&gt;
&lt;td&gt;NFS 4.1/4.2&lt;/td&gt;
&lt;td&gt;NFS 4.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File locking&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data location&lt;/td&gt;
&lt;td&gt;S3&lt;/td&gt;
&lt;td&gt;S3&lt;/td&gt;
&lt;td&gt;EFS storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container support&lt;/td&gt;
&lt;td&gt;Needs FUSE in container&lt;/td&gt;
&lt;td&gt;Bind mount&lt;/td&gt;
&lt;td&gt;Bind mount&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IaC support&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Not yet&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The bigger picture is the one &lt;a href="https://www.linkedin.com/in/andywarfield/" rel="noopener noreferrer"&gt;Warfield&lt;/a&gt; articulated. S3 is systematically absorbing the interfaces that used to require separate services. The file-versus-object debate may genuinely be winding down — not because one paradigm won, but because the boundary between them is dissolving.&lt;/p&gt;

&lt;p&gt;The product is impressive, and the strategic direction is clear. But the operational tooling — IaC, error reporting, observability — hasn't caught up with the feature yet. If you're evaluating S3 Files today, it's worth testing, worth understanding, and worth giving a bit of time to mature before putting it on a critical path.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
    </item>
    <item>
      <title>Your article is well-written. But is it yours?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:45:42 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/your-article-is-well-written-but-is-it-yours-59kh</link>
      <guid>https://forem.com/dzhuneyt/your-article-is-well-written-but-is-it-yours-59kh</guid>
      <description>&lt;p&gt;Most technical writing used to come from one of two places - direct experience or serious research. Either way, you had to process the material yourself before you could write about it.&lt;/p&gt;

&lt;p&gt;Now? AI can handle all of it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The research&lt;/li&gt;
&lt;li&gt;The structure&lt;/li&gt;
&lt;li&gt;The writing&lt;/li&gt;
&lt;li&gt;The polish&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using it for writing and polish? Totally fine. As a non-native English speaker with an engineering brain, I'd rather be solving problems in code. AI helps me take what I actually know and present it clearly. That's a genuine win.&lt;/p&gt;

&lt;p&gt;The problem starts when AI is also generating the knowledge. When you ask it to write about a topic you haven't worked with, haven't debugged at 2am, haven't formed an opinion on through experience - you end up with something that reads well and says nothing.&lt;/p&gt;

&lt;p&gt;I'm glad I started my career before this shift. I got to build the habit of sitting with a problem, forming my own understanding, and then writing about it. That sequence matters - and it's the first thing people skip now that they don't have to do it.&lt;/p&gt;

&lt;p&gt;As engineers, the value we bring isn't in assembling sentences. It's the hard-won context - the tradeoffs we've hit, the production incidents that changed how we think, the "don't do this" that only comes from having done it. AI can't fake that. But it can produce a convincing imitation that fools everyone except the people who actually know.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>contentwriting</category>
      <category>software</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI subscriptions are subsidized. Here's what happens when that stops.</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Sat, 04 Apr 2026 21:20:21 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/ai-subscriptions-are-subsidized-heres-what-happens-when-that-stops-293f</link>
      <guid>https://forem.com/dzhuneyt/ai-subscriptions-are-subsidized-heres-what-happens-when-that-stops-293f</guid>
      <description>&lt;p&gt;Right now, every time you send a query to ChatGPT, Claude, or Gemini, the company behind it is losing money on you. Not breaking even. Losing money.&lt;/p&gt;

&lt;p&gt;OpenAI spent $1.69 for every dollar of revenue it generated in 2025 and is projecting $25 billion in cash burn this year. Even its $200/month Pro plan - the most expensive consumer AI subscription on the market - loses money on heavy users. Anthropic's gross margins were negative 94% in 2024, and its CEO has said publicly that if growth slips from 10x to 5x per year, the company goes bankrupt. These aren't scrappy startups - OpenAI just closed $122 billion at an $852 billion valuation - but even at that scale, the math is tight.&lt;/p&gt;

&lt;p&gt;We've all seen subsidized tech before. The question that keeps coming up is what happens when &lt;em&gt;this&lt;/em&gt; subsidy stops.&lt;/p&gt;

&lt;p&gt;Here's where I think this goes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The blunt approach: higher prices, lower limits.&lt;/strong&gt; The most obvious move. Either your subscription goes up, or your usage limits go down. Both are ways to close the gap between what you pay and what it costs to serve you. It's simple but it's also risky. Push too hard and you lose the users you spent billions acquiring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Keep the cheap entry price, charge heavy users more.&lt;/strong&gt; Instead of a flat monthly subscription with unlimited usage, companies shift to usage-based pricing. Your base subscription stays affordable, but the moment you start burning through tokens - writing code, doing research, running agents - you start paying per query or per token. The casual user barely notices. The power user gets a €300 bill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Bring ads into the mix.&lt;/strong&gt; Think about it - most entry-level AI usage today is basically a fancy Google search replacement. Google search has been full of ads forever, and nobody bats an eye. OpenAI already launched an ads pilot that hit $100M in annualized revenue in under six weeks. If the free and cheap tiers start showing sponsored results, most consumers will shrug. They've been trained to expect it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Make it cheaper to run.&lt;/strong&gt; The optimist's answer. Better models that do more with less compute. Custom chips like Google's TPUs or Amazon's Trainium that slash inference costs by an order of magnitude. If the cost per query drops 10x, current pricing might actually sustain itself. In practice, though, efficiency gains keep getting reinvested into bigger, more capable models rather than cheaper ones. Every generation of hardware buys you more intelligence, not lower prices. I'd love this to be the answer, but the track record says otherwise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Let enterprise customers foot the bill.&lt;/strong&gt; Keep the consumer product cheap, maybe even free, and treat it as a marketing funnel. The real money comes from enterprise contracts at €50-100 per seat per month. Your €20/month subscription isn't the business. It's the demo. 70% of Fortune 100 companies already use Claude. That's where the margin lives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Smaller, local models chip away at the big players.&lt;/strong&gt; This is the wildcard that these companies don't control. The gap between frontier models and open-weight alternatives has compressed to 6-12 months. Running capable models locally on a laptop is real, not theoretical. If enough people and companies shift to smaller, focused LLMs for everyday tasks, the giants lose market share. Though even in this scenario, the remaining users still cost more to serve than they pay. It's the same problem at a smaller scale.&lt;/p&gt;

&lt;p&gt;Will it be one of these? Probably a mix of several, playing out differently across companies. OpenAI seems to be leaning into ads and enterprise. Anthropic is betting heavily on enterprise. Meta is keeping things free because it makes money from ads elsewhere. Google can subsidize AI from search revenue indefinitely - or at least until search revenue starts declining.&lt;/p&gt;

&lt;p&gt;The one thing I'm fairly sure of: the €20/month all-you-can-eat era won't last forever. Whether that means higher prices, usage caps, or ads in your chat window depends on which company you're using - but the direction is the same across all of them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>openai</category>
    </item>
    <item>
      <title>Synology - Container Manager - Run a Docker Compose Project on CRON schedule</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Fri, 22 Mar 2024 13:42:42 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/synology-container-manager-run-a-docker-compose-project-on-cron-schedule-16n4</link>
      <guid>https://forem.com/dzhuneyt/synology-container-manager-run-a-docker-compose-project-on-cron-schedule-16n4</guid>
      <description>&lt;p&gt;I have a Synology NAS at home that I use for various purposes. One of the things I do with it is run a Docker Compose&lt;br&gt;
project that contains a few services.&lt;/p&gt;

&lt;p&gt;I have a few services that I want to run on a schedule - for example, a backup service that runs every night, or a media optimizer container that converts my movie library to playback formats better suited to older smart TVs.&lt;/p&gt;

&lt;p&gt;In this post, I will show you how to run a Docker Compose project on a schedule using the Synology Container Manager and&lt;br&gt;
the Synology Task Scheduler.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You will need a browser with developer tools for inspecting network requests, such as Google Chrome.&lt;/p&gt;
&lt;h3&gt;
  
  
  Getting the Docker Compose project ID
&lt;/h3&gt;

&lt;p&gt;Inspecting the Network requests is the only way to retrieve the unique ID that Synology uses to identify the Docker&lt;br&gt;
Compose project.&lt;/p&gt;

&lt;p&gt;We will need this ID later, so let's first explore how to find it.&lt;/p&gt;

&lt;p&gt;This assumes you have already created a "Project" inside "Container Manager" ("project" is the name DSM uses for Docker Compose stacks). Open Container Manager and navigate to the project you want to run on a schedule. If it is already running, stop it.&lt;/p&gt;

&lt;p&gt;Before starting it again, open your browser's Network inspector. In Chrome on a Mac, press &lt;code&gt;Cmd + Option + I&lt;/code&gt; and click the "Network" tab. Other browsers have similar developer tools; check their documentation if you haven't used them before.&lt;/p&gt;

&lt;p&gt;Hit the "Clear network log" button at the left of this floating window if you want to reduce clutter. Finally, start the&lt;br&gt;
project by clicking the "Start" button in the Container Manager and observe the Network requests. You should find one&lt;br&gt;
that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SYNO.Docker.Project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open it, and inside the "Payload" tab, you will find a JSON object that contains the ID of the project. It should be&lt;br&gt;
something long and random looking, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;9fb91ca5-d817-42f8-8ddc-2acdf4d94494
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cbrmngmsy7hceqg1ms6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cbrmngmsy7hceqg1ms6.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take note of this ID, because you will need it in the next step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scheduling a CRON job via Synology Task Scheduler
&lt;/h3&gt;

&lt;p&gt;Open the Control Panel of your Synology NAS and navigate to the "Task Scheduler" app. Click "Create" -&amp;gt; "Scheduled Task" -&amp;gt; "User-defined script".&lt;/p&gt;

&lt;p&gt;Name your task based on your preferences, e.g. "Run Backup service on schedule".&lt;/p&gt;

&lt;p&gt;In the "User" dropdown, select "root"; otherwise Synology will not have the necessary permissions to run the Docker&lt;br&gt;
Compose project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrz1yfa5xjqqhr56jruf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrz1yfa5xjqqhr56jruf.png" alt="Image description" width="800" height="876"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the "Schedule" tab pick the schedule that suits your needs.&lt;/p&gt;

&lt;p&gt;In the "Task Settings" tab, inside the "User-defined script" text box, paste the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;synowebapi &lt;span class="nt"&gt;--exec&lt;/span&gt; &lt;span class="nv"&gt;api&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;SYNO.Docker.Project &lt;span class="nv"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"start_stream"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"9fb91ca5-d817-42f8-8ddc-2acdf4d94494"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsifp3o39sx6rgotelib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsifp3o39sx6rgotelib.png" alt="Image description" width="800" height="884"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Replace the &lt;code&gt;id&lt;/code&gt; value with the ID you found in the previous step.&lt;/p&gt;

&lt;p&gt;That's it! Synology will now start the project on the schedule you provided.&lt;/p&gt;

</description>
      <category>synology</category>
      <category>nas</category>
      <category>docker</category>
    </item>
    <item>
      <title>Procrastination Trick - Create Large Pull Requests</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Fri, 24 Nov 2023 10:50:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/procrastination-trick-create-large-pull-requests-1m7h</link>
      <guid>https://forem.com/aws-builders/procrastination-trick-create-large-pull-requests-1m7h</guid>
      <description>&lt;p&gt;I know this is an unpopular opinion: Large PRs are a good way to mask procrastination.&lt;/p&gt;

&lt;p&gt;Change my mind. No, seriously. I've yet to see a solid argument for why bundling many changes into the same PR is better than introducing those changes in smaller PRs.&lt;/p&gt;

&lt;p&gt;Is your team suffering from the "Large PRs" syndrome? In my opinion, large PRs are harmful in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower Code Quality: Due to the complexity and volume of changes in large PRs, there's a higher likelihood of compromising on code quality just to get the PR merged.&lt;/li&gt;
&lt;li&gt;Integration Issues: Large PRs often touch multiple parts of the system, which can lead to unexpected integration issues, especially if the changes aren’t properly isolated or modularized.&lt;/li&gt;
&lt;li&gt;Increased Merge Conflicts: The longer a PR remains open, the higher the chance of merge conflicts with other changes being merged into the same codebase. Resolving these conflicts can be time-consuming and error-prone.&lt;/li&gt;
&lt;li&gt;Knowledge Silos: When a single developer or a small group works on a large set of changes without regular integration, it can lead to knowledge silos, where only a few people understand certain parts of the codebase.&lt;/li&gt;
&lt;li&gt;Demotivation and Overwhelm: Reviewers might feel overwhelmed or demotivated by the sheer size of the PR, leading to less effective reviews or reluctance to review at all.&lt;/li&gt;
&lt;li&gt;Blocking Other Work: Large PRs can block other work from being merged, especially if they involve core parts of the system that other developers need to work on.&lt;/li&gt;
&lt;li&gt;Rollback Complexity: If a problem is discovered after merging a large PR, rolling back the changes can be complex and risky, especially if multiple features or fixes are entangled.&lt;/li&gt;
&lt;li&gt;Delaying user value: Large PRs delay delivering small increments of value to the end users, which is at the core of agile software development practices.&lt;/li&gt;
&lt;li&gt;Hiding procrastination: Lazy engineers can use large PRs to hide the actual time it took to work on individual parts of the code. Because the PR is so big, it's difficult to spot periods of procrastination somewhere in the middle. That's why, from an organization's perspective, it's important to discourage large PRs.&lt;/li&gt;
&lt;/ul&gt;
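&lt;p&gt;A quick, low-ceremony way to gauge a branch's size before opening a PR is &lt;code&gt;git diff --shortstat&lt;/code&gt;. A minimal sketch in a throwaway repo (branch and file names are illustrative; in practice you'd just run the last command on your own feature branch):&lt;/p&gt;

```shell
set -e
# Demo in a disposable repo so the commands are runnable end to end.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -qb main
echo "base" > app.txt
git add app.txt && git commit -qm "base"
git checkout -qb feature
printf "line1\nline2\n" >> app.txt
git add app.txt && git commit -qm "feature work"
# Summarize the would-be PR: files changed, insertions, deletions
git diff --shortstat main...HEAD
```

&lt;p&gt;If &lt;code&gt;--shortstat&lt;/code&gt; reports changes in the thousands of lines, that branch is probably several PRs in disguise.&lt;/p&gt;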

&lt;p&gt;What size are the Pull Requests in your organization? Can you relate to any of these?&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>Cut Cloud Costs by Migrating to On-Premises. Is It That Simple?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Tue, 14 Nov 2023 14:38:43 +0000</pubDate>
      <link>https://forem.com/aws-builders/cut-cloud-costs-by-migrating-to-on-premises-is-it-that-simple-2a9d</link>
      <guid>https://forem.com/aws-builders/cut-cloud-costs-by-migrating-to-on-premises-is-it-that-simple-2a9d</guid>
      <description>&lt;p&gt;Let’s imagine you are a Netflix-sized company, and you are paying AWS $1 million per month on Cloud bills. Your app is composed of a bunch of Lambdas behind an API Gateway.&lt;/p&gt;

&lt;p&gt;You think $1m per month is too much to spend on infrastructure.&lt;/p&gt;

&lt;p&gt;After spending some engineering effort, you realize that if you re-architect your app into a monolithic container and launch it in a Kubernetes cluster in your on-premises infrastructure, you can run the same app for $100,000 per month — a 10x savings.&lt;/p&gt;

&lt;p&gt;Now, you might ask yourself — why is AWS charging you 10 times more? Couldn’t they apply the same optimization that you did, and cascade the cost savings to you as a customer?&lt;/p&gt;

&lt;p&gt;The answer is — YES. They could do that.&lt;/p&gt;

&lt;p&gt;But they would have to do the same for EVERY SINGLE CUSTOMER of AWS.&lt;/p&gt;

&lt;p&gt;Your application and its infrastructure are always unique, to some degree. There is an infinite number of configurations and fine-tuned optimizations that would be needed for every unique customer.&lt;/p&gt;

&lt;p&gt;What cloud providers like AWS do is optimize for the common denominators that all customers are interested in: scalability, ease and speed of redeployment, high availability, and finally cost. Cost is one of the factors, but certainly not the most important one for every AWS customer. Maybe it is the most important factor for your organization (and that's fine). But some organizations value ease of redeployment and ease of scalability more. Every organization is unique and has slightly different needs.&lt;/p&gt;

&lt;p&gt;That’s why you can migrate to On-Premises, fine-tune and optimize better for the factors that are more important to you, whereas AWS will always try to find the middle ground and optimize for the weighted average of all AWS customers’ needs. Not for your organization’s individual needs.&lt;/p&gt;

&lt;p&gt;Does your organization fall under the “weighted average customer requirements” segment, or are you closer to one end of the spectrum, putting more weight on saving costs?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>economy</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Migrating from the Cloud to On-Premises infrastructure. Is it worth it?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Tue, 14 Nov 2023 10:27:22 +0000</pubDate>
      <link>https://forem.com/aws-builders/should-you-stick-to-cloud-native-or-migrate-to-on-premises-3epd</link>
      <guid>https://forem.com/aws-builders/should-you-stick-to-cloud-native-or-migrate-to-on-premises-3epd</guid>
      <description>&lt;p&gt;When a major company shares a case study about saving $1 million annually by moving from AWS cloud to an On-Premises setup, it always grabs the attention of the DevOps community.&lt;/p&gt;

&lt;p&gt;It sparks the age-old debates about cloud-native vs. on-premises and serverless vs. serverful.&lt;/p&gt;

&lt;p&gt;E.g. using on-demand Lambdas and DynamoDB vs. always-on Docker containers and PostgreSQL/MySQL, deployed to Kubernetes.&lt;/p&gt;

&lt;p&gt;Now, it’s easy to jump on the hype train of the next article that suits your ideology better.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maybe you have a long engineering background developing traditional applications with containers, monoliths, and SQL databases; that might bias you toward supporting such a Cloud to On-Premises migration.&lt;/li&gt;
&lt;li&gt;But if you are coming from the Serverless world and prefer the simplicity of NOT managing servers, plus predictable pricing (pricing that goes hand in hand with usage), you might be a bit more supportive of the Cloud-native infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's no right or wrong answer. It's just engineering perspectives.&lt;/p&gt;

&lt;p&gt;A common problem, especially in large organizations, is that such decisions are usually based on exactly that: engineering perspectives, rather than business use cases.&lt;/p&gt;

&lt;p&gt;E.g. can the app benefit from the flexible scalability and predictable pricing of being cloud-native? Maybe you should pick that option then. Does it need to be on 24/7 and can’t tolerate cold starts? Maybe an always-on server is the better option here.&lt;/p&gt;

&lt;p&gt;And remember: the grass is always greener on the other side. But once you step into the weeds, you quickly realize that serverless and serverful are both far from perfect options, same as cloud-native and on-premises.&lt;/p&gt;

&lt;p&gt;In an ideal world, you would combine the strengths of both sides. So the next time you are thinking of architecture for your app — think about how you can leverage both.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Security: What is "Privilege Escalation"?</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Thu, 27 Jul 2023 16:12:15 +0000</pubDate>
      <link>https://forem.com/dzhuneyt/what-is-privilege-escalation-4hmj</link>
      <guid>https://forem.com/dzhuneyt/what-is-privilege-escalation-4hmj</guid>
      <description>&lt;p&gt;Privilege escalation is a common term in the Security industry.&lt;/p&gt;

&lt;p&gt;Let's illustrate what it means through an example.&lt;/p&gt;

&lt;p&gt;Imagine you have a key to your house and give it temporarily to a plumber, so they can fix something while you are on vacation.&lt;/p&gt;

&lt;p&gt;Your intent is to give the plumber temporary access to your house. But the plumber visits a locksmith and makes a copy of the key, essentially evading the time limit you put on access to your house.&lt;/p&gt;

&lt;p&gt;The same concept applies to software security. It's particularly relevant in Cloud security, when you give some service access to your Cloud account (e.g. temporary access to assume an IAM role within your AWS account).&lt;/p&gt;

&lt;p&gt;If the service is limited to just accessing resources, but not creating new ones, everything is fine and security works as intended. But if that limited access allows the service to create new IAM roles (essentially cutting new keys at the locksmith), the service can later access your Cloud resources without your permission. That is privilege escalation.&lt;/p&gt;
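&lt;p&gt;In AWS terms, one common guardrail is an explicit deny on IAM write actions, so a delegated role can use resources but never mint new credentials for itself. An illustrative (not exhaustive) policy sketch:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BlockPrivilegeEscalationPaths",
      "Effect": "Deny",
      "Action": [
        "iam:CreateRole",
        "iam:CreateUser",
        "iam:CreateAccessKey",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "*"
    }
  ]
}
```

&lt;p&gt;An explicit &lt;code&gt;Deny&lt;/code&gt; wins over any &lt;code&gt;Allow&lt;/code&gt; in IAM policy evaluation, which is what makes it useful as a guardrail against the "copy of the key" scenario above.&lt;/p&gt;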

</description>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS EFS Elastic vs Burstable throughput (benchmark)</title>
      <dc:creator>Dzhuneyt</dc:creator>
      <pubDate>Thu, 29 Dec 2022 23:19:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-efs-elastic-vs-burstable-throughput-benchmark-47b8</link>
      <guid>https://forem.com/aws-builders/aws-efs-elastic-vs-burstable-throughput-benchmark-47b8</guid>
      <description>&lt;p&gt;With the recent &lt;a href="https://aws.amazon.com/blogs/aws/new-announcing-amazon-efs-elastic-throughput/" rel="noopener noreferrer"&gt;announcement of AWS EFS Elastic Throughput mode&lt;/a&gt;, I was curious to understand if it's actually any better than the Burstable throughput mode, which I've used as the file storage for a few of my WordPress sites.&lt;/p&gt;

&lt;p&gt;I was encountering a few hiccups here and there during WordPress version or plugin upgrades, because of the way the Burstable throughput mode works. As long as your app is not doing any IO operations, the EFS accumulates burst credits, which are then spent during periods of reads/writes, up to a certain limit. When the burst credits are depleted, EFS read/write operations become painfully slow (at least in my experience).&lt;/p&gt;

&lt;p&gt;The announcement of Elastic throughput mode promises that you no longer have to worry about unpredictability of reads/writes to the file system and you should get a pretty consistent performance when using EFS, without resorting to Provisioned throughput mode, which can be pretty &lt;a href="https://aws.amazon.com/efs/pricing/" rel="noopener noreferrer"&gt;expensive&lt;/a&gt; due to over-provisioning during prolonged periods of low IO activity.&lt;/p&gt;

&lt;p&gt;What better way to evaluate and compare the two options than a benchmark that does it programmatically for me? Here are the results.&lt;/p&gt;

&lt;h1&gt;Benchmark results&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 1KB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;95.2ms&lt;/td&gt;
&lt;td&gt;100.6ms&lt;/td&gt;
&lt;td&gt;-5.37%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 1MB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;366.2ms&lt;/td&gt;
&lt;td&gt;369.8ms&lt;/td&gt;
&lt;td&gt;-0.97%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Writing 10 files, 100MB each&lt;/em&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Elastic&lt;/th&gt;
&lt;th&gt;Bursting&lt;/th&gt;
&lt;th&gt;Time difference&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;12.161s&lt;/td&gt;
&lt;td&gt;17.081s&lt;/td&gt;
&lt;td&gt;-28.80%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h1&gt;Results breakdown&lt;/h1&gt;

&lt;p&gt;Elastic throughput completed the write operations faster than Bursting throughput mode in all of the benchmarks above.&lt;/p&gt;

&lt;p&gt;For simple sporadic file writes there is not much of a difference, but Elastic throughput really starts to show its benefits at larger file sizes. Writing 10 files of 100MB can easily save your app 5 seconds of waiting time, savings you can potentially propagate to your end users to improve the user experience.&lt;/p&gt;

&lt;p&gt;Long story short, I am definitely switching my existing EFS file systems to Elastic throughput mode after these results. The pricing is pretty much the same and there's nothing that stops me from doing the switch at this point.&lt;/p&gt;

&lt;p&gt;Of course, don't take my word for it. Do your own due diligence and benchmarks before making a similar switch.&lt;/p&gt;

&lt;h1&gt;Considerations&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The tests were done using two identical Lambdas, each with an EFS file system attached&lt;/li&gt;
&lt;li&gt;The two Lambdas ran exactly the same code&lt;/li&gt;
&lt;li&gt;The numbers above are adjusted to exclude potential side effects like Lambda cold starts, network latency and variability in any surrounding code inside the Lambda runtime. Timestamps are only snapshotted just before and right after the filesystem IO.&lt;/li&gt;
&lt;li&gt;Tests are repeated 5 times with a sleep of 10 seconds between runs, to give both EFS throughput modes plenty of time to trigger any internal caching or warming mechanisms EFS might have. The results of all 5 tests are averaged to produce the numbers in the benchmark.&lt;/li&gt;
&lt;li&gt;Code used to benchmark is available in a &lt;a href="https://github.com/awesome-cdk/experiment-efs-elastic-throughput" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
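&lt;p&gt;The timing approach above can be sketched in a few lines of shell. This writes to a local temp directory rather than a real EFS mount (the actual benchmark runs inside a Lambda with EFS attached), so the absolute numbers are meaningless here; the point is the snapshot-before/snapshot-after pattern with nothing else inside the measured window:&lt;/p&gt;

```shell
# Measurement loop sketch: snapshot time just before and right after the
# filesystem IO, so setup and teardown don't pollute the numbers.
dir=$(mktemp -d)                 # stand-in for the EFS mount point
start=$(date +%s%N)              # snapshot just before the IO
for i in $(seq 1 10); do
  dd if=/dev/zero of="$dir/bench_$i" bs=1M count=1 status=none
done
end=$(date +%s%N)                # snapshot right after the IO
echo "Wrote 10 x 1MB in $(( (end - start) / 1000000 )) ms"
rm -rf "$dir"
```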

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;I hope you found this benchmark useful. I'm looking forward to reading your findings in the comments below. You can also catch me at my &lt;a href="https://aws-cdk.com" rel="noopener noreferrer"&gt;AWS CDK blog&lt;/a&gt;, where you can learn more about corner cases like this or find interesting AWS CDK constructs you can use for your app infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>efs</category>
      <category>benchmark</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
