<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: sanjay yadav</title>
    <description>The latest articles on Forem by sanjay yadav (@sanjay_yadav_).</description>
    <link>https://forem.com/sanjay_yadav_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3902053%2Ff2660aca-b937-4ed0-aa7c-c58c559b48b0.jpg</url>
      <title>Forem: sanjay yadav</title>
      <link>https://forem.com/sanjay_yadav_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sanjay_yadav_"/>
    <language>en</language>
    <item>
      <title>Stop Overpaying for GP2: GP3 Cost &amp; Performance Explained</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Sat, 09 May 2026 04:20:02 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/stop-overpaying-for-gp2-gp3-cost-performance-explained-519p</link>
      <guid>https://forem.com/sanjay_yadav_/stop-overpaying-for-gp2-gp3-cost-performance-explained-519p</guid>
      <description>&lt;p&gt;Most people still use GP2 as the default EBS volume type in launch templates and infrastructure modules.&lt;/p&gt;

&lt;p&gt;Not because it is still the best option today — but because it has existed in infrastructure templates for years.&lt;/p&gt;

&lt;p&gt;In most production audits I review, oversized GP2 volumes are still one of the easiest cost optimizations left untouched.&lt;/p&gt;

&lt;p&gt;The real issue is that GP2 ties storage capacity directly to performance. Under sustained workloads, that coupling becomes operationally expensive very quickly.&lt;/p&gt;

&lt;p&gt;GP3 changes this model entirely.&lt;/p&gt;

&lt;p&gt;Instead of scaling performance through storage size, GP3 separates performance from capacity and gives much more predictable behavior under real workloads.&lt;/p&gt;

&lt;p&gt;To understand why this matters operationally, it helps to look at how both architectures actually behave.&lt;/p&gt;




&lt;h1&gt;
  
  
  How GP2 Was Designed
&lt;/h1&gt;

&lt;p&gt;GP2 follows a size-based performance model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;IOPS &lt;span class="o"&gt;=&lt;/span&gt; 3 × Volume Size &lt;span class="o"&gt;(&lt;/span&gt;GB&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum 100 IOPS&lt;/li&gt;
&lt;li&gt;Burst up to 3,000 IOPS&lt;/li&gt;
&lt;li&gt;Maximum 16,000 IOPS&lt;/li&gt;
&lt;li&gt;Credit-based burst model&lt;/li&gt;
&lt;li&gt;Throughput indirectly tied to volume size&lt;/li&gt;
&lt;/ul&gt;
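
&lt;p&gt;Putting the size rule and the caps together, the baseline can be sketched as a small helper (illustrative only; real volumes also layer burst credits on top of this baseline):&lt;/p&gt;

```shell
# Baseline IOPS for a GP2 volume: 3 IOPS per GB, floored at 100, capped at 16,000.
gp2_baseline_iops() {
  local size_gb=$1
  local iops=$((size_gb * 3))
  if [ "$iops" -lt 100 ]; then iops=100; fi
  if [ "$iops" -gt 16000 ]; then iops=16000; fi
  echo "$iops"
}

gp2_baseline_iops 20     # 100   (floor applies)
gp2_baseline_iops 500    # 1500
gp2_baseline_iops 6000   # 16000 (cap applies)
```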

&lt;p&gt;The original design made sense at the time.&lt;/p&gt;

&lt;p&gt;As storage scaled, performance scaled with it.&lt;/p&gt;

&lt;p&gt;But modern workloads are more sustained and much less tolerant of performance variability.&lt;/p&gt;




&lt;h1&gt;
  
  
  The Credit-Based Burst Problem
&lt;/h1&gt;

&lt;p&gt;GP2 relies on a token bucket credit system.&lt;/p&gt;

&lt;p&gt;When workloads exceed baseline performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Burst credits are consumed&lt;/li&gt;
&lt;li&gt;IOPS temporarily increases&lt;/li&gt;
&lt;li&gt;Credits eventually deplete&lt;/li&gt;
&lt;li&gt;Performance drops back to baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where production instability starts showing up.&lt;/p&gt;

&lt;p&gt;Short benchmarks often look fine.&lt;/p&gt;

&lt;p&gt;But under sustained write-heavy workloads, latency starts increasing once credits are exhausted.&lt;/p&gt;

&lt;p&gt;In CloudWatch, this usually appears as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rising &lt;code&gt;VolumeQueueLength&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Increasing &lt;code&gt;VolumeTotalWriteTime&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A falling &lt;code&gt;BurstBalance&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is not peak performance.&lt;/p&gt;

&lt;p&gt;The problem is maintaining predictable performance under continuous pressure.&lt;/p&gt;




&lt;h1&gt;
  
  
  GP3 Changes the Architecture Completely
&lt;/h1&gt;

&lt;p&gt;GP3 removes the dependency between storage size and performance.&lt;/p&gt;

&lt;p&gt;Every GP3 volume includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3,000 IOPS baseline&lt;/li&gt;
&lt;li&gt;125 MB/s throughput baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And you can provision independently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up to 16,000 IOPS&lt;/li&gt;
&lt;li&gt;Up to 1,000 MB/s throughput&lt;/li&gt;
&lt;li&gt;Up to 16 TiB storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is no burst-credit behavior.&lt;/p&gt;

&lt;p&gt;Performance becomes deterministic instead of temporary.&lt;/p&gt;




&lt;h1&gt;
  
  
  Why This Matters Operationally
&lt;/h1&gt;

&lt;p&gt;GP2 scales performance indirectly through storage size.&lt;/p&gt;

&lt;p&gt;GP3 scales performance directly based on workload requirements.&lt;/p&gt;

&lt;p&gt;That difference becomes extremely important for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;production databases&lt;/li&gt;
&lt;li&gt;sustained write-heavy workloads&lt;/li&gt;
&lt;li&gt;indexing systems&lt;/li&gt;
&lt;li&gt;search clusters&lt;/li&gt;
&lt;li&gt;high-transaction services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many cases, teams are effectively paying for extra storage capacity only to achieve the IOPS they need.&lt;/p&gt;




&lt;h1&gt;
  
  
  Real Cost Comparison
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Requirement
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;6,000 sustained IOPS&lt;/li&gt;
&lt;li&gt;500 GB usable storage&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  GP2
&lt;/h2&gt;

&lt;p&gt;To achieve 6,000 baseline IOPS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;6000 ÷ 3 &lt;span class="o"&gt;=&lt;/span&gt; 2000 GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Approximate monthly cost:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2 TB × ~&lt;span class="nv"&gt;$0&lt;/span&gt;.10/GB ≈ ~&lt;span class="nv"&gt;$200&lt;/span&gt;/month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of that storage capacity exists only to unlock performance.&lt;/p&gt;




&lt;h2&gt;
  
  
  GP3
&lt;/h2&gt;

&lt;p&gt;Provision directly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;500 GB storage&lt;/li&gt;
&lt;li&gt;6,000 IOPS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Approximate monthly cost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storage: ~$40&lt;/li&gt;
&lt;li&gt;Additional IOPS: ~$15&lt;/li&gt;
&lt;li&gt;Throughput adjustment: ~$5–10&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Estimated total:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~&lt;span class="nv"&gt;$60&lt;/span&gt;–65/month
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
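
&lt;p&gt;The same comparison in shell arithmetic, using assumed us-east-1 list prices (~$0.10/GB for GP2, ~$0.08/GB for GP3, ~$0.005 per extra IOPS, ~$0.04 per extra MB/s) and an assumed 250 MB/s throughput target; prices vary by region, so treat this as a sketch:&lt;/p&gt;

```shell
# Work in cents to keep the arithmetic integer-only.
gb_needed=$((6000 / 3))                  # GP2 size needed for 6,000 baseline IOPS
gp2_cents=$((gb_needed * 10))            # ~$0.10/GB-month

gp3_storage=$((500 * 8))                 # ~$0.08/GB-month
gp3_iops=$(( (6000 - 3000) * 5 / 10 ))   # ~$0.005 per IOPS above the 3,000 baseline
gp3_tput=$(( (250 - 125) * 4 ))          # ~$0.04 per MB/s above the 125 baseline

gp3_cents=$((gp3_storage + gp3_iops + gp3_tput))

echo "GP2: \$$((gp2_cents / 100))/month"   # GP2: $200/month
echo "GP3: \$$((gp3_cents / 100))/month"   # GP3: $60/month
```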



&lt;p&gt;Once this scales across multiple workloads and environments, the savings become very noticeable.&lt;/p&gt;




&lt;h1&gt;
  
  
  Migration Is Surprisingly Simple
&lt;/h1&gt;

&lt;p&gt;In most environments, moving from GP2 to GP3 is operationally straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 modify-volume &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--volume-id&lt;/span&gt; vol-xxxxxxxx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--volume-type&lt;/span&gt; gp3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
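
&lt;p&gt;One detail worth knowing: converting without extra flags leaves the volume at the GP3 baseline (3,000 IOPS / 125 MB/s). For the 6,000-IOPS example above, provision performance in the same call. The volume ID is a placeholder, and the command is built as an array here so it can be reviewed before anything is sent to AWS:&lt;/p&gt;

```shell
# Hypothetical volume ID; adjust IOPS/throughput to your workload.
volume_id="vol-xxxxxxxx"

cmd=(aws ec2 modify-volume
  --volume-id "$volume_id"
  --volume-type gp3
  --iops 6000
  --throughput 250)

echo "${cmd[@]}"   # review, then execute with: "${cmd[@]}"
```

&lt;p&gt;Progress of the change can then be followed with &lt;code&gt;aws ec2 describe-volumes-modifications&lt;/code&gt;.&lt;/p&gt;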



&lt;p&gt;In many cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no downtime required&lt;/li&gt;
&lt;li&gt;no IAM changes required&lt;/li&gt;
&lt;li&gt;monitoring stays identical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Operationally, the migration is small.&lt;/p&gt;

&lt;p&gt;From a performance predictability perspective, the impact is significant.&lt;/p&gt;




&lt;h1&gt;
  
  
  When GP3 Should Be the Default
&lt;/h1&gt;

&lt;p&gt;GP3 makes more sense for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production databases&lt;/li&gt;
&lt;li&gt;Search systems&lt;/li&gt;
&lt;li&gt;Sustained workloads&lt;/li&gt;
&lt;li&gt;Write-heavy applications&lt;/li&gt;
&lt;li&gt;Cost optimization initiatives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GP2 mainly survives today because older infrastructure templates still default to it.&lt;/p&gt;




&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;GP2 ties performance to storage size and depends heavily on burst behavior.&lt;/p&gt;

&lt;p&gt;GP3 separates capacity from performance and delivers much more predictable sustained I/O.&lt;/p&gt;

&lt;p&gt;For most modern production workloads, GP3 should probably be the default starting point — not the upgrade path.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>performance</category>
    </item>
    <item>
      <title>T3 vs T2 EC2: Save Costs and Avoid CPU Throttling in AWS</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Fri, 08 May 2026 04:20:50 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/t3-vs-t2-ec2-save-costs-and-avoid-cpu-throttling-in-aws-3m2o</link>
      <guid>https://forem.com/sanjay_yadav_/t3-vs-t2-ec2-save-costs-and-avoid-cpu-throttling-in-aws-3m2o</guid>
      <description>&lt;h2&gt;
  
  
  Why This Comparison Still Matters
&lt;/h2&gt;

&lt;p&gt;In many AWS accounts we still see T2 instances running production workloads. Most of the time, this isn’t by choice. It’s usually inheritance — old templates, old AMIs, or “this is what we launched years ago”.&lt;/p&gt;

&lt;p&gt;The issue: you are likely paying more for worse performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Burstable EC2 Instances Originally Worked
&lt;/h2&gt;

&lt;p&gt;When AWS introduced T2 instances, burstable compute was a solid idea.&lt;/p&gt;

&lt;p&gt;You received:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A low baseline CPU&lt;/li&gt;
&lt;li&gt;CPU credits earned during idle time&lt;/li&gt;
&lt;li&gt;The ability to burst when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For small, spiky workloads, this model worked well enough at the time. T2 became the default for running applications on EC2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where T2 Starts to Fall Apart
&lt;/h2&gt;

&lt;p&gt;Over time, several issues became obvious. In real environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Baseline CPU performance is low&lt;/li&gt;
&lt;li&gt;CPU credits accumulate slowly&lt;/li&gt;
&lt;li&gt;Unlimited bursting costs extra&lt;/li&gt;
&lt;li&gt;Performance becomes unpredictable under sustained load&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most teams, this shows up as mysterious CPU throttling, even though the instance size “should” be enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed With T3
&lt;/h2&gt;

&lt;p&gt;T3 instances are not just a pricing refresh. They are built on AWS Nitro, which fundamentally improves how compute is delivered.&lt;/p&gt;

&lt;p&gt;The result is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher baseline CPU&lt;/li&gt;
&lt;li&gt;Faster CPU credit earning&lt;/li&gt;
&lt;li&gt;More consistent performance&lt;/li&gt;
&lt;li&gt;Lower cost for equivalent instance sizes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why AWS positions T3 as the replacement for T2.&lt;/p&gt;

&lt;h2&gt;
  
  
  How CPU Credits Differ Internally
&lt;/h2&gt;

&lt;p&gt;CPU Credit Behavior (High Level)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1raxanm6xlz1pw357krw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1raxanm6xlz1pw357krw.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key difference is not just bursting — it’s how quickly credits are earned and how predictable performance is once credits are spent.&lt;/p&gt;
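
&lt;p&gt;As an illustrative example of the earn-rate difference, using the published 10% per-vCPU baselines for t2.micro (1 vCPU) and t3.micro (2 vCPUs):&lt;/p&gt;

```shell
# Credits earned per hour = vCPUs x baseline utilization (%) x 60 minutes / 100.
t2_micro=$((1 * 10 * 60 / 100))   # 6 credits/hour
t3_micro=$((2 * 10 * 60 / 100))   # 12 credits/hour

echo "t2.micro: ${t2_micro} credits/hour, t3.micro: ${t3_micro} credits/hour"
```

&lt;p&gt;Same nominal baseline percentage, but twice the vCPUs and a faster-filling credit bucket; that is a large part of why T3 feels more predictable under load.&lt;/p&gt;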

&lt;h2&gt;
  
  
  Practical Cost Reality
&lt;/h2&gt;

&lt;p&gt;For the same workload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;T2 often ends up more expensive&lt;/li&gt;
&lt;li&gt;T3 delivers better throughput&lt;/li&gt;
&lt;li&gt;Unlimited bursting on T2 adds surprise costs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once sustained CPU usage enters the picture, T2 stops being economical very quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Example
&lt;/h2&gt;

&lt;p&gt;If you are running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;APIs&lt;/li&gt;
&lt;li&gt;Background workers&lt;/li&gt;
&lt;li&gt;Small application servers&lt;/li&gt;
&lt;li&gt;Internal tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;T3 will almost always behave better under real traffic patterns. In practice, many teams switch to T3 and immediately notice fewer CPU-related alerts without changing instance size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operational Considerations
&lt;/h2&gt;

&lt;p&gt;There is no special configuration required to use T3. From an operations perspective:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM remains unchanged&lt;/li&gt;
&lt;li&gt;Monitoring stays the same&lt;/li&gt;
&lt;li&gt;Autoscaling behavior improves due to more stable CPU usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes T3 a low-risk upgrade in most environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration Guidance
&lt;/h2&gt;

&lt;p&gt;Migrating from T2 to T3 is usually straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure your AMI supports Nitro-based instances&lt;/li&gt;
&lt;li&gt;Launch a T3 instance of the same size&lt;/li&gt;
&lt;li&gt;Observe CPU credit behavior and latency&lt;/li&gt;
&lt;li&gt;Roll out gradually if the workload is critical&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The biggest blocker is often legacy AMIs, not application behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use T3 vs When Not To
&lt;/h2&gt;

&lt;p&gt;Use T3 when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launching new workloads&lt;/li&gt;
&lt;li&gt;Running general-purpose services&lt;/li&gt;
&lt;li&gt;Cost efficiency matters&lt;/li&gt;
&lt;li&gt;You want predictable burst behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use T2 only when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are stuck with legacy AMIs&lt;/li&gt;
&lt;li&gt;Nitro is not supported&lt;/li&gt;
&lt;li&gt;Migration is temporarily impossible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even then, T2 should be treated as a short-term compromise.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;


&lt;ol&gt;
&lt;li&gt;Is T3 cheaper than T2 in AWS EC2?
In most real-world scenarios, T3 instances are more cost-efficient than T2. They offer improved baseline performance and flexible CPU credit usage, which helps reduce unexpected costs.&lt;/li&gt;
&lt;li&gt;What are CPU credits in T2 and T3 instances?
CPU credits allow burstable instances to handle temporary spikes in usage. T2 instances have stricter credit limits, while T3 provides more flexible credit options, making it better for variable workloads.&lt;/li&gt;
&lt;li&gt;When should I choose T3 over T2?
You should choose T3 when your workload requires consistent performance or occasional bursts. T3 is generally better for modern applications due to its cost-performance balance.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Read More on KubeBlogs
&lt;/h2&gt;

&lt;p&gt;If you're exploring DevOps, Kubernetes, and cloud infrastructure, these guides will help you go deeper:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubeblogs.com/how-civo-kubernetes-routes-pod-traffic-single-egress-ip-explained/" rel="noopener noreferrer"&gt;How Kubernetes Routes Pod Traffic with a Single Egress IP&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/gp3-vs-gp2-ebs-volume-aws/" rel="noopener noreferrer"&gt;GP3 vs GP2 EBS Volumes: Performance and Cost Comparison&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/self-hosted-github-actions-runner/" rel="noopener noreferrer"&gt;How to Set Up a Self-Hosted GitHub Actions Runner&lt;/a&gt;&lt;br&gt;
These articles cover Kubernetes networking, AWS storage optimization, and CI/CD infrastructure — useful when scaling beyond local development environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;T2 instances are effectively legacy at this point.&lt;br&gt;
T3 instances are cheaper, faster, and operationally simpler.&lt;/p&gt;

&lt;p&gt;For anything new, T3 should be the default.&lt;br&gt;
T2 should exist only where history forces it to.&lt;/p&gt;

&lt;p&gt;That single decision can quietly reduce cost and eliminate an entire class of performance issues.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>performance</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A Lot of AWS Users Still Manage SSH Keys the Hard Way</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Thu, 07 May 2026 04:48:27 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/a-lot-of-aws-users-still-manage-ssh-keys-the-hard-way-h4c</link>
      <guid>https://forem.com/sanjay_yadav_/a-lot-of-aws-users-still-manage-ssh-keys-the-hard-way-h4c</guid>
      <description>&lt;p&gt;I still come across a lot of EC2 setups where a brand-new SSH key pair gets created every time a new instance is launched.&lt;/p&gt;

&lt;p&gt;It works, but over time it also becomes difficult to manage — especially across multiple environments, engineers, or temporary workloads.&lt;/p&gt;

&lt;p&gt;What’s easy to miss is that AWS already lets you import an existing public SSH key directly as an EC2 key pair.&lt;/p&gt;

&lt;p&gt;That means you can keep using the same SSH workflow you already manage locally instead of constantly juggling separate keys for different instances.&lt;/p&gt;
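
&lt;p&gt;A minimal sketch of the flow (the key name and file path are illustrative; the import command is echoed so you can review it and run it yourself with valid AWS credentials):&lt;/p&gt;

```shell
# Generate a local key pair if you don't already have one:
ssh-keygen -t ed25519 -f ./demo_key -N "" -C "shared-ssh-workflow" -q

# Only the public half is imported; the private key never leaves your machine.
echo aws ec2 import-key-pair \
  --key-name my-workstation-key \
  --public-key-material fileb://./demo_key.pub
```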

&lt;p&gt;It’s a simple improvement, but it makes SSH access much easier to keep consistent.&lt;/p&gt;

&lt;p&gt;I found this breakdown useful because it explains the process clearly without overcomplicating it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubeblogs.com/how-to-import-an-existing-ssh-key-to-aws-as-a-key-pair/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/how-to-import-an-existing-ssh-key-to-aws-as-a-key-pair/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Worth keeping in mind if you're trying to simplify SSH access management across AWS environments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>EC2 vs Fargate Isn’t Really About Cost</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Wed, 06 May 2026 05:06:24 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/ec2-vs-fargate-isnt-really-about-cost-21oa</link>
      <guid>https://forem.com/sanjay_yadav_/ec2-vs-fargate-isnt-really-about-cost-21oa</guid>
      <description>&lt;p&gt;Most comparisons between EC2 and Fargate usually come down to pricing and operational overhead.&lt;/p&gt;

&lt;p&gt;Which one is cheaper.&lt;br&gt;
Which one scales better.&lt;br&gt;
Which one reduces infrastructure management.&lt;/p&gt;

&lt;p&gt;But I came across an interesting point recently — the bigger tradeoff is often about control.&lt;/p&gt;

&lt;p&gt;With EC2, you manage the underlying infrastructure yourself. That gives you deeper visibility, more customization, and flexibility around how workloads are configured and operated.&lt;/p&gt;

&lt;p&gt;Fargate changes that model quite a bit.&lt;/p&gt;

&lt;p&gt;You stop worrying about nodes, patching, or capacity management, which simplifies operations significantly. At the same time, you also give up some control over the environment underneath the workload.&lt;/p&gt;

&lt;p&gt;That shift impacts more than infrastructure management alone. It changes how teams think about deployments, observability, scaling, and day-to-day operations.&lt;/p&gt;

&lt;p&gt;What I liked about this breakdown is that it doesn’t try to position one as universally better. It explains where EC2 still makes sense, where Fargate fits better, and why the decision depends heavily on workload and team priorities.&lt;/p&gt;

&lt;p&gt;Worth reading if you’re currently evaluating container infrastructure on AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubeblogs.com/ec2-or-fargate/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/ec2-or-fargate/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious what others are seeing in practice — are teams moving more toward managed infrastructure now, or still preferring the control that comes with EC2?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>AWS Cost Isn’t Just Finance — It’s an Engineering Problem</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Tue, 05 May 2026 04:24:00 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/aws-cost-isnt-just-finance-its-an-engineering-problem-325d</link>
      <guid>https://forem.com/sanjay_yadav_/aws-cost-isnt-just-finance-its-an-engineering-problem-325d</guid>
      <description>&lt;p&gt;Most teams treat cloud cost as a finance problem.&lt;/p&gt;

&lt;p&gt;But the root cause is usually engineering.&lt;/p&gt;

&lt;p&gt;Bills spike, dashboards grow, alerts fire — but the underlying issue rarely gets fixed.&lt;/p&gt;

&lt;p&gt;That idea stood out to me while reading about an approach where AWS cost was handled like an SRE problem — using the same mindset applied to reliability and performance.&lt;/p&gt;

&lt;p&gt;Instead of asking “why is the bill high?”, the focus shifts to:&lt;/p&gt;

&lt;p&gt;Where does unnecessary spend originate in the system?&lt;br&gt;
What patterns keep repeating in cost spikes?&lt;br&gt;
Which design decisions are driving ongoing waste?&lt;/p&gt;

&lt;p&gt;What stands out is how closely this aligns with SRE thinking:&lt;/p&gt;

&lt;p&gt;Treat cost as a measurable signal&lt;br&gt;
Focus on root causes, not symptoms&lt;br&gt;
Continuously optimize instead of reacting&lt;/p&gt;

&lt;p&gt;It changes the conversation from cost-cutting to system design.&lt;/p&gt;

&lt;p&gt;This article breaks it down clearly with real examples:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/we-treated-aws-cost-like-an-sre-problem-heres-the-result/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/we-treated-aws-cost-like-an-sre-problem-heres-the-result/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do you treat cloud cost as a finance metric, or as an engineering problem?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>sre</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Most AWS Security Setups Ignore This One Outbound Risk</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Mon, 04 May 2026 05:43:41 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/youre-securing-inbound-traffic-in-aws-but-what-about-outbound-3bli</link>
      <guid>https://forem.com/sanjay_yadav_/youre-securing-inbound-traffic-in-aws-but-what-about-outbound-3bli</guid>
      <description>&lt;p&gt;Most AWS security setups focus heavily on inbound traffic.&lt;/p&gt;

&lt;p&gt;But outbound is often left open.&lt;/p&gt;

&lt;p&gt;Security Groups. NACLs. Maybe WAF.&lt;br&gt;
That’s usually where the effort goes.&lt;/p&gt;

&lt;p&gt;But outbound traffic often gets far less attention — and that’s where problems begin.&lt;/p&gt;

&lt;p&gt;Every outbound request starts with a DNS query.&lt;/p&gt;

&lt;p&gt;Before your application connects anywhere, it first resolves a domain name. That step is easy to ignore, but it’s where a lot of risk begins.&lt;/p&gt;

&lt;p&gt;If something inside your VPC reaches a malicious domain, the communication already starts at the DNS level.&lt;/p&gt;

&lt;p&gt;This is where DNS-level control starts to matter.&lt;/p&gt;

&lt;p&gt;Route 53 DNS Firewall gives you control before traffic even reaches an IP.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;p&gt;Allow trusted domains&lt;br&gt;
Block known malicious domains&lt;br&gt;
Monitor suspicious queries&lt;/p&gt;

&lt;p&gt;What’s often overlooked is where this control actually sits.&lt;/p&gt;

&lt;p&gt;It operates within the VPC resolver path, separate from Security Groups and NACLs.&lt;br&gt;
So it doesn’t replace them — it fills a gap they don’t cover.&lt;/p&gt;

&lt;p&gt;It’s a small addition, but it changes how you think about egress security.&lt;/p&gt;

&lt;p&gt;Instead of reacting later, you control traffic at the first step.&lt;/p&gt;

&lt;p&gt;This article explains the setup and real use cases clearly:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/route-53-dns-firewall-aws-egress-security/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/route-53-dns-firewall-aws-egress-security/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How are you handling egress security today — only with Security Groups, or adding DNS-level controls as well?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>Run Multiple AI Agents Locally Without Docker (Using Git Worktrees)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Fri, 01 May 2026 05:54:21 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/running-parallel-ai-agents-locally-using-git-worktrees-a-practical-setup-105o</link>
      <guid>https://forem.com/sanjay_yadav_/running-parallel-ai-agents-locally-using-git-worktrees-a-practical-setup-105o</guid>
      <description>&lt;p&gt;Running multiple AI agents locally sounds simple — until you actually try to manage them.&lt;/p&gt;

&lt;p&gt;Each agent needs its own context, its own branch, and often its own environment. The obvious approach is to duplicate repositories or constantly switch branches, but that quickly becomes difficult to manage.&lt;/p&gt;

&lt;p&gt;I ran into this while experimenting with parallel workflows, and the setup started breaking down faster than expected.&lt;/p&gt;

&lt;p&gt;The approach that worked well for me was using Git worktrees.&lt;/p&gt;

&lt;p&gt;Instead of cloning the same repository multiple times, worktrees allow you to create separate working directories from a single repository. Each one can operate independently with its own branch, which makes it much easier to run parallel processes.&lt;/p&gt;

&lt;p&gt;In practice, this helps with:&lt;/p&gt;

&lt;p&gt;Running multiple agents at the same time without conflicts&lt;br&gt;
Keeping environments isolated without duplicating repos&lt;br&gt;
Switching between contexts without disrupting ongoing work&lt;/p&gt;

&lt;p&gt;It’s a simple idea, but it makes a noticeable difference when working with parallel AI workflows locally.&lt;/p&gt;
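
&lt;p&gt;A minimal sketch of the pattern (repository and branch names are illustrative):&lt;/p&gt;

```shell
# One repository, multiple independent working directories.
git init -q agents-repo
cd agents-repo
git -c user.name=demo -c user.email=demo@example.com \
  commit --allow-empty -q -m "initial commit"

# One worktree (and branch) per agent:
git worktree add -b agent-a ../agent-a
git worktree add -b agent-b ../agent-b

git worktree list   # main checkout plus the two agent directories
```

&lt;p&gt;Each directory can then be handed to a different agent process; the branches stay isolated while sharing a single object store.&lt;/p&gt;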

&lt;p&gt;I’ve put together a step-by-step setup with commands and a working example here:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/run-parallel-ai-agents-git-worktrees-local-setup/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/run-parallel-ai-agents-git-worktrees-local-setup/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How are you currently managing multiple AI workflows locally?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>git</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why GCP Prometheus Doesn’t Work with Grafana (And How to Fix It)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Thu, 30 Apr 2026 04:46:10 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/connecting-gcp-managed-prometheus-to-grafana-isnt-as-straightforward-as-it-looks-4i</link>
      <guid>https://forem.com/sanjay_yadav_/connecting-gcp-managed-prometheus-to-grafana-isnt-as-straightforward-as-it-looks-4i</guid>
      <description>&lt;p&gt;I thought connecting Grafana to GCP Managed Prometheus would take 10 minutes.&lt;/p&gt;

&lt;p&gt;It didn’t.&lt;/p&gt;

&lt;p&gt;With a normal Prometheus setup, you just point Grafana to an endpoint and start querying metrics. But GCP Managed Prometheus works differently, and that’s where things get confusing.&lt;/p&gt;

&lt;p&gt;Grafana was connected, but no metrics were showing — which made debugging frustrating.&lt;/p&gt;

&lt;p&gt;Instead of a direct connection, you’re dealing with Google Cloud APIs, IAM permissions, and authentication layers. Small misconfigurations can break everything.&lt;/p&gt;

&lt;p&gt;After digging into it, I realized most issues come down to authentication and endpoint configuration.&lt;/p&gt;

&lt;p&gt;To simplify things, I set up Grafana using Docker Compose and worked through it step by step.&lt;/p&gt;

&lt;p&gt;This approach helped me:&lt;/p&gt;

&lt;p&gt;Run Grafana in a controlled environment&lt;br&gt;
Fix authentication issues step by step&lt;br&gt;
Confirm whether metrics were actually being returned&lt;/p&gt;

&lt;p&gt;Once everything was aligned, the setup worked as expected.&lt;/p&gt;

&lt;p&gt;If you’re working with GKE, Kubernetes monitoring, or managed observability on GCP, this is something you’ll likely run into.&lt;/p&gt;

&lt;p&gt;I documented the full setup with a working example and exact configuration here:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/how-to-connect-google-cloud-managed-prometheus-to-grafana-using-docker-compose-2025-setup-guide/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/how-to-connect-google-cloud-managed-prometheus-to-grafana-using-docker-compose-2025-setup-guide/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What part took you the most time — authentication, configuration, or getting metrics to show up?&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>grafana</category>
    </item>
    <item>
      <title>AWS Storage Performance Costs More Than You Think (A Real Comparison with Civo)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Wed, 29 Apr 2026 09:34:29 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/aws-storage-performance-costs-more-than-you-think-a-real-comparison-with-civo-228n</link>
      <guid>https://forem.com/sanjay_yadav_/aws-storage-performance-costs-more-than-you-think-a-real-comparison-with-civo-228n</guid>
      <description>&lt;p&gt;I didn’t expect storage performance to be the most expensive part of my AWS bill.&lt;/p&gt;

&lt;p&gt;But it was.&lt;/p&gt;

&lt;p&gt;When people talk about cloud costs, they usually focus on compute.&lt;br&gt;
But in many real-world workloads, storage performance is where costs quietly increase.&lt;/p&gt;

&lt;p&gt;And the surprising part?&lt;br&gt;
You don’t just pay for storage in AWS — you pay for how fast it is.&lt;/p&gt;

&lt;p&gt;What’s happening in AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You choose an EBS volume type (gp3, io1, io2, etc.)&lt;/li&gt;
&lt;li&gt;Then you pay separately for IOPS and throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means:&lt;br&gt;
The faster your storage needs to be, the more you pay.&lt;/p&gt;

&lt;p&gt;In some cases, storage performance can cost more than the compute itself.&lt;/p&gt;

&lt;p&gt;How Civo approaches it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NVMe-based storage included by default&lt;/li&gt;
&lt;li&gt;No separate performance charges&lt;/li&gt;
&lt;li&gt;Consistent disk speed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple way to think about it:&lt;br&gt;
AWS: Storage is cheap, performance is not&lt;br&gt;
Civo: Performance comes built-in&lt;/p&gt;

&lt;p&gt;Why this matters&lt;/p&gt;

&lt;p&gt;If you're running Kubernetes, databases, or anything disk-heavy,&lt;br&gt;
this is where your cloud bill silently grows.&lt;/p&gt;

&lt;p&gt;Full breakdown&lt;/p&gt;

&lt;p&gt;I broke this down with a real comparison here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubeblogs.com/aws-charges-extra-for-storage-performance-that-civo-ships-by-default/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/aws-charges-extra-for-storage-performance-that-civo-ships-by-default/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever been surprised by AWS billing, especially due to storage or IOPS?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Why Your GKE Load Balancer Fails After 30 Seconds (Real Fix)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Tue, 28 Apr 2026 10:12:08 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/why-your-gke-load-balancer-fails-after-30-seconds-real-fix-1j5g</link>
      <guid>https://forem.com/sanjay_yadav_/why-your-gke-load-balancer-fails-after-30-seconds-real-fix-1j5g</guid>
      <description>&lt;p&gt;Still getting 504 errors in GKE even when everything looks fine?&lt;/p&gt;

&lt;p&gt;We faced the same issue. Pods were healthy, APIs were responding, and there were no errors in logs.&lt;/p&gt;

&lt;p&gt;The real problem wasn’t the application. It was a 30-second timeout.&lt;/p&gt;




&lt;p&gt;We spent hours debugging this.&lt;/p&gt;

&lt;p&gt;Everything looked normal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pods were running fine&lt;/li&gt;
&lt;li&gt;APIs were responding&lt;/li&gt;
&lt;li&gt;No crashes or error logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But still, requests kept failing.&lt;/p&gt;

&lt;p&gt;We were seeing random 503 and 504 errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Confusing Part
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Short requests worked perfectly&lt;/li&gt;
&lt;li&gt;Logs showed no issues&lt;/li&gt;
&lt;li&gt;System appeared healthy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But anything taking more than around 30 seconds failed every time.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Thought Initially
&lt;/h2&gt;

&lt;p&gt;At first, we assumed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application bug&lt;/li&gt;
&lt;li&gt;Database latency&lt;/li&gt;
&lt;li&gt;Resource limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We checked everything. Nothing was wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cause
&lt;/h2&gt;

&lt;p&gt;After digging deeper, we found the actual issue:&lt;/p&gt;

&lt;p&gt;GKE Load Balancer has a default timeout of 30 seconds.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your backend may still be processing&lt;/li&gt;
&lt;li&gt;But the load balancer stops waiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: 504 Gateway Timeout.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fix: BackendConfig
&lt;/h2&gt;

&lt;p&gt;We fixed it using BackendConfig.&lt;/p&gt;

&lt;p&gt;It allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increase timeout&lt;/li&gt;
&lt;li&gt;Customize health checks&lt;/li&gt;
&lt;li&gt;Control load balancer behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What We Changed
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloud.google.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BackendConfig&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-backend-config&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;timeoutSec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change increased the timeout from 30 seconds to 60 seconds.&lt;/p&gt;

&lt;p&gt;After applying it, the issue was resolved.&lt;/p&gt;




&lt;h2&gt;
  
  
  Attach It to Your Service
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cloud.google.com/backend-config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"ports":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{"8003":"my-backend-config"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration and redeploy your service.&lt;/p&gt;
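
&lt;p&gt;As a rough sketch (the file names here are placeholders, not from the original setup), applying and verifying the change might look like this:&lt;/p&gt;

```shell
# Apply the BackendConfig and the annotated Service
# (file names are illustrative).
kubectl apply -f backend-config.yaml
kubectl apply -f service.yaml

# Confirm the BackendConfig exists and shows timeoutSec: 60
kubectl get backendconfig my-backend-config -o yaml
```

&lt;p&gt;Keep in mind the timeout is applied to the Google Cloud backend service behind the load balancer, so the change may take a short while to propagate.&lt;/p&gt;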




&lt;h2&gt;
  
  
  Important Note
&lt;/h2&gt;

&lt;p&gt;If you don’t configure BackendConfig:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default timeout remains 30 seconds&lt;/li&gt;
&lt;li&gt;Long-running requests will fail&lt;/li&gt;
&lt;li&gt;You have limited control over behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Bonus Tip
&lt;/h2&gt;

&lt;p&gt;If your application regularly takes longer to respond:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid increasing timeout too much&lt;/li&gt;
&lt;li&gt;Consider using asynchronous processing (queues or background jobs)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;This was not a complex bug.&lt;/p&gt;

&lt;p&gt;It was a default setting that we were not aware of.&lt;/p&gt;

&lt;p&gt;Sometimes the real issue is not in your code, but in infrastructure defaults.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.kubeblogs.com/fixing-504-errors-in-gke-load-balancer-how-backendconfig-solved-our-30-second-timeout-problem/" rel="noopener noreferrer"&gt;Full step-by-step guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you work with GKE, this can save you a lot of time.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>devops</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Stop Using T2 Instances — They Cost More Than You Think (T2 vs T3)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Tue, 28 Apr 2026 09:43:27 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_/stop-using-t2-instances-they-cost-more-than-you-think-t2-vs-t3-4kcp</link>
      <guid>https://forem.com/sanjay_yadav_/stop-using-t2-instances-they-cost-more-than-you-think-t2-vs-t3-4kcp</guid>
      <description>&lt;h1&gt;
  
  
  Stop Using T2 Instances — They’re Quietly Increasing Your AWS Bill
&lt;/h1&gt;

&lt;p&gt;I used to think T2 instances were the cheapest option on AWS.&lt;/p&gt;

&lt;p&gt;They look affordable, they’re everywhere in tutorials, and honestly — most of us just start with them.&lt;/p&gt;

&lt;p&gt;But after checking a few AWS bills closely, I realized something wasn’t adding up.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 What’s Actually Going On with T2?
&lt;/h2&gt;

&lt;p&gt;T2 instances run on a CPU credit system.&lt;/p&gt;

&lt;p&gt;In simple terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When your app is idle → you earn credits
&lt;/li&gt;
&lt;li&gt;When CPU usage increases → you spend credits
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At first, this seems like a smart system.&lt;/p&gt;

&lt;p&gt;But here’s the part most people miss 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  Where the Extra Cost Comes From
&lt;/h2&gt;

&lt;p&gt;When your instance runs out of CPU credits, one of two things happens.&lt;/p&gt;

&lt;p&gt;In standard mode (the T2 default), performance is throttled down to the baseline. If &lt;strong&gt;unlimited mode&lt;/strong&gt; is enabled (easy to switch on for T2, and the default on T3), you start getting charged for the extra CPU usage instead.&lt;/p&gt;

&lt;p&gt;No clear warning. No obvious alert.&lt;/p&gt;

&lt;p&gt;Just a slightly higher bill at the end of the month.&lt;/p&gt;

&lt;p&gt;That’s exactly what happened to me.&lt;/p&gt;
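
&lt;p&gt;A back-of-the-envelope sketch of how that charge adds up. The 10% baseline and the roughly $0.05 per surplus vCPU-hour rate are assumptions taken from AWS's published t2.micro numbers; verify them for your instance size and region.&lt;/p&gt;

```python
# Rough surplus-credit charge for a t2.micro in unlimited mode.
# Baseline and price are assumptions; check AWS docs for your size/region.
BASELINE_UTIL = 0.10   # t2.micro baseline: 10% of one vCPU
SURPLUS_PRICE = 0.05   # $ per vCPU-hour used above baseline (Linux)

def surplus_charge(avg_cpu_util, hours):
    """Extra charge when average utilization stays above the baseline.

    Credits earned at the baseline rate offset usage up to the baseline,
    so only the portion above it is billed as surplus vCPU-hours.
    """
    over = max(0.0, avg_cpu_util - BASELINE_UTIL)
    return over * hours * SURPLUS_PRICE

# Averaging 40% CPU across a 730-hour month:
print(round(surplus_charge(0.40, 730), 2))  # 10.95
```

&lt;p&gt;On an instance whose on-demand price is around $8 to $9 a month, that hypothetical surplus alone can exceed the cost of the instance itself.&lt;/p&gt;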




&lt;h2&gt;
  
  
  Why I Switched to T3
&lt;/h2&gt;

&lt;p&gt;T3 instances are basically the improved version of T2.&lt;/p&gt;

&lt;p&gt;After switching, I noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More consistent performance
&lt;/li&gt;
&lt;li&gt;Fewer surprises in billing
&lt;/li&gt;
&lt;li&gt;Better overall value
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They use newer hardware and handle CPU usage more efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  T2 vs T3 (Simple View)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;T2&lt;/th&gt;
&lt;th&gt;T3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU Handling&lt;/td&gt;
&lt;td&gt;Credit-based&lt;/td&gt;
&lt;td&gt;Smarter credits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;Can drop&lt;/td&gt;
&lt;td&gt;More stable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing&lt;/td&gt;
&lt;td&gt;Can spike&lt;/td&gt;
&lt;td&gt;More predictable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generation&lt;/td&gt;
&lt;td&gt;Older&lt;/td&gt;
&lt;td&gt;Newer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  When You Should Avoid T2
&lt;/h2&gt;

&lt;p&gt;From experience, avoid T2 if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your app has traffic spikes
&lt;/li&gt;
&lt;li&gt;You're running anything close to production
&lt;/li&gt;
&lt;li&gt;You don’t want unpredictable costs
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Quick Tip
&lt;/h2&gt;

&lt;p&gt;If you're currently using T2, check this:&lt;/p&gt;

&lt;p&gt;Go to your billing dashboard → look for &lt;strong&gt;CPU credit charges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You might already be paying more than expected.&lt;/p&gt;
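
&lt;p&gt;If you prefer the CLI, CloudWatch exposes this directly via the CPUSurplusCreditsCharged metric for burstable instances in unlimited mode. The instance ID and dates below are placeholders:&lt;/p&gt;

```shell
# Sum the surplus credits charged to one instance over a period.
# Requires AWS credentials; instance ID and dates are placeholders.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUSurplusCreditsCharged \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2026-04-01T00:00:00Z \
  --end-time 2026-04-28T00:00:00Z \
  --period 86400 \
  --statistics Sum
```

&lt;p&gt;A non-zero Sum means the instance has already been billed beyond its credit balance.&lt;/p&gt;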

&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;T2 isn’t “bad” — it’s just outdated for most real use cases.&lt;/p&gt;

&lt;p&gt;T3 is usually the safer and smarter choice now.&lt;/p&gt;

&lt;p&gt;🔗 Full article: &lt;a href="https://www.kubeblogs.com/why-t3-is-better-than-t2-for-most-aws-ec2-workloads/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/why-t3-is-better-than-t2-for-most-aws-ec2-workloads/&lt;/a&gt;&lt;br&gt;
If you're into simple, practical DevOps tips, follow along 👍&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
