<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: sanjay yadav</title>
    <description>The latest articles on Forem by sanjay yadav (@sanjay_yadav_df9aa9af10ef).</description>
    <link>https://forem.com/sanjay_yadav_df9aa9af10ef</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3902053%2F5eec0c1c-b2bd-4bbf-bdd7-3ecb9bda5f10.png</url>
      <title>Forem: sanjay yadav</title>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sanjay_yadav_df9aa9af10ef"/>
    <language>en</language>
    <item>
      <title>Most AWS Security Setups Ignore This One Outbound Risk</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Mon, 04 May 2026 05:43:41 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/youre-securing-inbound-traffic-in-aws-but-what-about-outbound-3bli</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/youre-securing-inbound-traffic-in-aws-but-what-about-outbound-3bli</guid>
      <description>&lt;p&gt;Most AWS security setups focus heavily on inbound traffic.&lt;/p&gt;

&lt;p&gt;But outbound is often left open.&lt;/p&gt;

&lt;p&gt;Security Groups. NACLs. Maybe WAF.&lt;br&gt;
That’s usually where the effort goes.&lt;/p&gt;

&lt;p&gt;But outbound traffic often gets far less attention — and that’s where problems begin.&lt;/p&gt;

&lt;p&gt;Every outbound request starts with a DNS query.&lt;/p&gt;

&lt;p&gt;Before your application connects anywhere, it first resolves a domain name. That step is easy to ignore, but it’s where a lot of risk begins.&lt;/p&gt;

&lt;p&gt;If something inside your VPC reaches a malicious domain, the communication already starts at the DNS level.&lt;/p&gt;

&lt;p&gt;This is where DNS-level control starts to matter.&lt;/p&gt;

&lt;p&gt;Route 53 DNS Firewall gives you control before traffic even reaches an IP.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow trusted domains&lt;/li&gt;
&lt;li&gt;Block known malicious domains&lt;/li&gt;
&lt;li&gt;Monitor suspicious queries&lt;/li&gt;
&lt;/ul&gt;
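&lt;p&gt;As a rough sketch, setting this up with the AWS CLI looks something like the following (all IDs, names, and the domain are placeholders, not real values):&lt;/p&gt;

```shell
# Placeholder IDs and names throughout; adjust to your account.

# 1. Create a domain list and add the domains to block
aws route53resolver create-firewall-domain-list --name blocked-domains

aws route53resolver update-firewall-domains \
  --firewall-domain-list-id rslvr-fdl-example \
  --operation ADD \
  --domains "malicious.example.com."

# 2. Create a rule group with a BLOCK rule that uses the list
aws route53resolver create-firewall-rule-group --name egress-dns-rules

aws route53resolver create-firewall-rule \
  --firewall-rule-group-id rslvr-frg-example \
  --firewall-domain-list-id rslvr-fdl-example \
  --priority 100 \
  --action BLOCK \
  --block-response NXDOMAIN \
  --name block-known-bad

# 3. Attach the rule group to the VPC resolver path
aws route53resolver associate-firewall-rule-group \
  --firewall-rule-group-id rslvr-frg-example \
  --vpc-id vpc-0123456789abcdef0 \
  --priority 101 \
  --name vpc-egress-dns
```

&lt;p&gt;Once associated, DNS queries from the VPC are evaluated against these rules before any connection is made.&lt;/p&gt;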

&lt;p&gt;What’s often overlooked is where this control actually sits.&lt;/p&gt;

&lt;p&gt;It operates within the VPC resolver path, separate from Security Groups and NACLs.&lt;br&gt;
So it doesn’t replace them — it fills a gap they don’t cover.&lt;/p&gt;

&lt;p&gt;It’s a small addition, but it changes how you think about egress security.&lt;/p&gt;

&lt;p&gt;Instead of reacting later, you control traffic at the first step.&lt;/p&gt;

&lt;p&gt;This article explains the setup and real use cases clearly:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/route-53-dns-firewall-aws-egress-security/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/route-53-dns-firewall-aws-egress-security/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How are you handling egress security today — only with Security Groups, or adding DNS-level controls as well?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>Run Multiple AI Agents Locally Without Docker (Using Git Worktrees)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Fri, 01 May 2026 05:54:21 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/running-parallel-ai-agents-locally-using-git-worktrees-a-practical-setup-105o</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/running-parallel-ai-agents-locally-using-git-worktrees-a-practical-setup-105o</guid>
      <description>&lt;p&gt;Running multiple AI agents locally sounds simple — until you actually try to manage them.&lt;/p&gt;

&lt;p&gt;Each agent needs its own context, its own branch, and often its own environment. The obvious approach is to duplicate repositories or constantly switch branches, but that quickly becomes difficult to manage.&lt;/p&gt;

&lt;p&gt;I ran into this while experimenting with parallel workflows, and the setup started breaking down faster than expected.&lt;/p&gt;

&lt;p&gt;The approach that worked well for me was using Git worktrees.&lt;/p&gt;

&lt;p&gt;Instead of cloning the same repository multiple times, worktrees allow you to create separate working directories from a single repository. Each one can operate independently with its own branch, which makes it much easier to run parallel processes.&lt;/p&gt;

&lt;p&gt;In practice, this helps with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running multiple agents at the same time without conflicts&lt;/li&gt;
&lt;li&gt;Keeping environments isolated without duplicating repos&lt;/li&gt;
&lt;li&gt;Switching between contexts without disrupting ongoing work&lt;/li&gt;
&lt;/ul&gt;
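&lt;p&gt;A minimal version of that setup looks like this (branch and directory names are just examples):&lt;/p&gt;

```shell
# From inside an existing repo: one worktree per agent,
# each on its own branch and in its own directory.
git worktree add -b agent-a ../agent-a
git worktree add -b agent-b ../agent-b

# List every active worktree for this repo
git worktree list

# Clean up when an agent is done
git worktree remove ../agent-a
git branch -D agent-a
```

&lt;p&gt;Each directory is a full checkout backed by the same object store, so nothing is duplicated and the agents never fight over a shared working tree.&lt;/p&gt;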

&lt;p&gt;It’s a simple idea, but it makes a noticeable difference when working with parallel AI workflows locally.&lt;/p&gt;

&lt;p&gt;I’ve put together a step-by-step setup with commands and a working example here:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/run-parallel-ai-agents-git-worktrees-local-setup/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/run-parallel-ai-agents-git-worktrees-local-setup/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How are you currently managing multiple AI workflows locally?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>git</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why GCP Prometheus Doesn’t Work with Grafana (And How to Fix It)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Thu, 30 Apr 2026 04:46:10 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/connecting-gcp-managed-prometheus-to-grafana-isnt-as-straightforward-as-it-looks-4i</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/connecting-gcp-managed-prometheus-to-grafana-isnt-as-straightforward-as-it-looks-4i</guid>
      <description>&lt;p&gt;I thought connecting Grafana to GCP Managed Prometheus would take 10 minutes.&lt;/p&gt;

&lt;p&gt;It didn’t.&lt;/p&gt;

&lt;p&gt;With a normal Prometheus setup, you just point Grafana to an endpoint and start querying metrics. But GCP Managed Prometheus works differently, and that’s where things get confusing.&lt;/p&gt;

&lt;p&gt;Grafana was connected, but no metrics were showing — which made debugging frustrating.&lt;/p&gt;

&lt;p&gt;Instead of a direct connection, you’re dealing with Google Cloud APIs, IAM permissions, and authentication layers. Small misconfigurations can break everything.&lt;/p&gt;

&lt;p&gt;After digging into it, I realized most issues come down to authentication and endpoint configuration.&lt;/p&gt;

&lt;p&gt;To simplify things, I set up Grafana using Docker Compose and worked through it step by step.&lt;/p&gt;

&lt;p&gt;This approach helped me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run Grafana in a controlled environment&lt;/li&gt;
&lt;li&gt;Fix authentication issues step by step&lt;/li&gt;
&lt;li&gt;Confirm whether metrics were actually being returned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once everything was aligned, the setup worked as expected.&lt;/p&gt;
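&lt;p&gt;One common pattern (not necessarily the exact setup from the article) is to run Google’s prometheus-engine frontend as an authentication proxy next to Grafana. A rough Docker Compose sketch, where the project ID, image tags, and key path are all placeholders:&lt;/p&gt;

```yaml
# Sketch only: check the Managed Service for Prometheus docs for the
# current frontend image tag and flags. The frontend proxies PromQL
# queries to the Google API and handles authentication.
services:
  gmp-frontend:
    image: gke.gcr.io/prometheus-engine/frontend:v0.8.0
    command:
      - --web.listen-address=:9090
      - --query.project-id=my-gcp-project      # placeholder project ID
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /creds/sa-key.json
    volumes:
      - ./sa-key.json:/creds/sa-key.json:ro    # SA with monitoring.viewer
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:10.4.0
    ports:
      - "3000:3000"
    depends_on:
      - gmp-frontend
```

&lt;p&gt;Grafana then uses a plain Prometheus data source pointing at http://gmp-frontend:9090, and the proxy takes care of the Google Cloud authentication layer.&lt;/p&gt;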

&lt;p&gt;If you’re working with GKE, Kubernetes monitoring, or managed observability on GCP, this is something you’ll likely run into.&lt;/p&gt;

&lt;p&gt;I documented the full setup with a working example and exact configuration here:&lt;br&gt;
&lt;a href="https://www.kubeblogs.com/how-to-connect-google-cloud-managed-prometheus-to-grafana-using-docker-compose-2025-setup-guide/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/how-to-connect-google-cloud-managed-prometheus-to-grafana-using-docker-compose-2025-setup-guide/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What part took you the most time — authentication, configuration, or getting metrics to show up?&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>grafana</category>
    </item>
    <item>
      <title>AWS Storage Performance Costs More Than You Think (A Real Comparison with Civo)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Wed, 29 Apr 2026 09:34:29 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/aws-storage-performance-costs-more-than-you-think-a-real-comparison-with-civo-228n</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/aws-storage-performance-costs-more-than-you-think-a-real-comparison-with-civo-228n</guid>
      <description>&lt;p&gt;I didn’t expect storage performance to be the most expensive part of my AWS bill.&lt;/p&gt;

&lt;p&gt;But it was.&lt;/p&gt;

&lt;p&gt;When people talk about cloud costs, they usually focus on compute.&lt;br&gt;
But in many real-world workloads, storage performance is where costs quietly increase.&lt;/p&gt;

&lt;p&gt;And the surprising part?&lt;br&gt;
You don’t just pay for storage in AWS — you pay for how fast it is.&lt;/p&gt;

&lt;h2&gt;What’s happening in AWS&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You choose an EBS volume type (gp3, io1, io2, etc.)&lt;/li&gt;
&lt;li&gt;You then pay separately for provisioned IOPS and throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means:&lt;br&gt;
The faster your storage needs to be, the more you pay.&lt;/p&gt;

&lt;p&gt;In some cases, storage performance can cost more than the compute itself.&lt;/p&gt;
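&lt;p&gt;A concrete way to see the split, assuming the AWS CLI (the AZ and numbers are just examples):&lt;/p&gt;

```shell
# Two volumes of the same size can bill very differently.

# Baseline gp3: 3,000 IOPS and 125 MB/s are included in the GB price.
aws ec2 create-volume --volume-type gp3 --size 500 \
  --availability-zone us-east-1a

# Same size, but provisioned performance: the extra IOPS and
# throughput show up as separate line items on top of the GB price.
aws ec2 create-volume --volume-type gp3 --size 500 \
  --iops 10000 --throughput 700 \
  --availability-zone us-east-1a
```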

&lt;h2&gt;How Civo approaches it&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NVMe-based storage included by default&lt;/li&gt;
&lt;li&gt;No separate performance charges&lt;/li&gt;
&lt;li&gt;Consistent disk speed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;A simple way to think about it&lt;/h2&gt;

&lt;p&gt;AWS: storage is cheap, performance is not&lt;br&gt;
Civo: performance comes built-in&lt;/p&gt;

&lt;h2&gt;Why this matters&lt;/h2&gt;

&lt;p&gt;If you're running Kubernetes, databases, or anything disk-heavy,&lt;br&gt;
this is where your cloud bill silently grows.&lt;/p&gt;

&lt;h2&gt;Full breakdown&lt;/h2&gt;

&lt;p&gt;I broke this down with a real comparison here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.kubeblogs.com/aws-charges-extra-for-storage-performance-that-civo-ships-by-default/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/aws-charges-extra-for-storage-performance-that-civo-ships-by-default/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever been surprised by AWS billing, especially due to storage or IOPS?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Why Your GKE Load Balancer Fails After 30 Seconds (Real Fix)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Tue, 28 Apr 2026 10:12:08 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/why-your-gke-load-balancer-fails-after-30-seconds-real-fix-1j5g</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/why-your-gke-load-balancer-fails-after-30-seconds-real-fix-1j5g</guid>
      <description>&lt;p&gt;Still getting 504 errors in GKE even when everything looks fine?&lt;/p&gt;

&lt;p&gt;We faced the same issue. Pods were healthy, APIs were responding, and there were no errors in logs.&lt;/p&gt;

&lt;p&gt;The real problem wasn’t the application. It was a 30-second timeout.&lt;/p&gt;




&lt;p&gt;We spent hours debugging this.&lt;/p&gt;

&lt;p&gt;Everything looked normal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pods were running fine&lt;/li&gt;
&lt;li&gt;APIs were responding&lt;/li&gt;
&lt;li&gt;No crashes or error logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But still, requests kept failing.&lt;/p&gt;

&lt;p&gt;We were seeing random 503 and 504 errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Confusing Part
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Short requests worked perfectly&lt;/li&gt;
&lt;li&gt;Logs showed no issues&lt;/li&gt;
&lt;li&gt;System appeared healthy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But anything taking more than around 30 seconds failed every time.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Thought Initially
&lt;/h2&gt;

&lt;p&gt;At first, we assumed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application bug&lt;/li&gt;
&lt;li&gt;Database latency&lt;/li&gt;
&lt;li&gt;Resource limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We checked everything. Nothing was wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cause
&lt;/h2&gt;

&lt;p&gt;After digging deeper, we found the actual issue:&lt;/p&gt;

&lt;p&gt;GKE Load Balancer has a default timeout of 30 seconds.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your backend may still be processing&lt;/li&gt;
&lt;li&gt;But the load balancer stops waiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: 504 Gateway Timeout.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fix: BackendConfig
&lt;/h2&gt;

&lt;p&gt;We fixed it using BackendConfig.&lt;/p&gt;

&lt;p&gt;It allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increase timeout&lt;/li&gt;
&lt;li&gt;Customize health checks&lt;/li&gt;
&lt;li&gt;Control load balancer behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What We Changed
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloud.google.com/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BackendConfig&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-backend-config&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;timeoutSec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change increased the timeout from 30 seconds to 60 seconds.&lt;/p&gt;

&lt;p&gt;After applying it, the issue was resolved.&lt;/p&gt;




&lt;h2&gt;
  
  
  Attach It to Your Service
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cloud.google.com/backend-config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"ports":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{"8003":"my-backend-config"}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration and redeploy your service.&lt;/p&gt;
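&lt;p&gt;For reference, applying it looks roughly like this (the file and Service names are the examples from this post, not a fixed convention):&lt;/p&gt;

```shell
# Create the BackendConfig, then point the Service at it
kubectl apply -f backend-config.yaml

kubectl annotate service my-service \
  cloud.google.com/backend-config='{"ports": {"8003":"my-backend-config"}}' \
  --overwrite
```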




&lt;h2&gt;
  
  
  Important Note
&lt;/h2&gt;

&lt;p&gt;If you don’t configure BackendConfig:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default timeout remains 30 seconds&lt;/li&gt;
&lt;li&gt;Long-running requests will fail&lt;/li&gt;
&lt;li&gt;You have limited control over behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Bonus Tip
&lt;/h2&gt;

&lt;p&gt;If your application regularly takes longer to respond:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid increasing timeout too much&lt;/li&gt;
&lt;li&gt;Consider using asynchronous processing (queues or background jobs)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;This was not a complex bug.&lt;/p&gt;

&lt;p&gt;It was a default setting that we were not aware of.&lt;/p&gt;

&lt;p&gt;Sometimes the real issue is not in your code, but in infrastructure defaults.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://www.kubeblogs.com/fixing-504-errors-in-gke-load-balancer-how-backendconfig-solved-our-30-second-timeout-problem/" rel="noopener noreferrer"&gt;Full step-by-step guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you work with GKE, this can save you a lot of time.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gcp</category>
      <category>devops</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Stop Using T2 Instances — They Cost More Than You Think (T2 vs T3)</title>
      <dc:creator>sanjay yadav</dc:creator>
      <pubDate>Tue, 28 Apr 2026 09:43:27 +0000</pubDate>
      <link>https://forem.com/sanjay_yadav_df9aa9af10ef/stop-using-t2-instances-they-cost-more-than-you-think-t2-vs-t3-4kcp</link>
      <guid>https://forem.com/sanjay_yadav_df9aa9af10ef/stop-using-t2-instances-they-cost-more-than-you-think-t2-vs-t3-4kcp</guid>
      <description>&lt;h1&gt;
  
  
  Stop Using T2 Instances — They’re Quietly Increasing Your AWS Bill
&lt;/h1&gt;

&lt;p&gt;I used to think T2 instances were the cheapest option on AWS.&lt;/p&gt;

&lt;p&gt;They look affordable, they’re everywhere in tutorials, and honestly — most of us just start with them.&lt;/p&gt;

&lt;p&gt;But after checking a few AWS bills closely, I realized something wasn’t adding up.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤔 What’s Actually Going On with T2?
&lt;/h2&gt;

&lt;p&gt;T2 instances run on a CPU credit system.&lt;/p&gt;

&lt;p&gt;In simple terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When your app is idle → you earn credits
&lt;/li&gt;
&lt;li&gt;When CPU usage increases → you spend credits
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At first, this seems like a smart system.&lt;/p&gt;

&lt;p&gt;But here’s the part most people miss 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  Where the Extra Cost Comes From
&lt;/h2&gt;

&lt;p&gt;When your instance runs out of CPU credits, one of two things happens.&lt;/p&gt;

&lt;p&gt;In standard mode, performance is throttled down to the baseline. If &lt;strong&gt;unlimited mode&lt;/strong&gt; is enabled (it is the default for T3 and newer), you keep full performance, but you start getting charged for the extra CPU usage.&lt;/p&gt;

&lt;p&gt;No clear warning. No obvious alert.&lt;/p&gt;

&lt;p&gt;Just a slightly higher bill at the end of the month.&lt;/p&gt;

&lt;p&gt;That’s exactly what happened to me.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Switched to T3
&lt;/h2&gt;

&lt;p&gt;T3 instances are basically the improved version of T2.&lt;/p&gt;

&lt;p&gt;After switching, I noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More consistent performance
&lt;/li&gt;
&lt;li&gt;Fewer surprises in billing
&lt;/li&gt;
&lt;li&gt;Better overall value
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They use newer hardware and handle CPU usage more efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  T2 vs T3 (Simple View)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;T2&lt;/th&gt;
&lt;th&gt;T3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU Handling&lt;/td&gt;
&lt;td&gt;Credit-based&lt;/td&gt;
&lt;td&gt;Smarter credits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;Can drop&lt;/td&gt;
&lt;td&gt;More stable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing&lt;/td&gt;
&lt;td&gt;Can spike&lt;/td&gt;
&lt;td&gt;More predictable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generation&lt;/td&gt;
&lt;td&gt;Older&lt;/td&gt;
&lt;td&gt;Newer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  When You Should Avoid T2
&lt;/h2&gt;

&lt;p&gt;From experience, avoid T2 if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your app has traffic spikes
&lt;/li&gt;
&lt;li&gt;You're running anything close to production
&lt;/li&gt;
&lt;li&gt;You don’t want unpredictable costs
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Quick Tip
&lt;/h2&gt;

&lt;p&gt;If you're currently using T2, check this:&lt;/p&gt;

&lt;p&gt;Go to your billing dashboard → look for &lt;strong&gt;CPU credit charges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You might already be paying more than expected.&lt;/p&gt;
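&lt;p&gt;You can also check the mode directly with the AWS CLI (the instance ID is a placeholder):&lt;/p&gt;

```shell
# See whether each instance is in standard or unlimited mode
aws ec2 describe-instance-credit-specifications \
  --instance-ids i-0123456789abcdef0

# Optionally switch back to standard so running out of credits
# throttles performance instead of adding charges
aws ec2 modify-instance-credit-specification \
  --instance-credit-specifications "InstanceId=i-0123456789abcdef0,CpuCredits=standard"
```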

&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;T2 isn’t “bad” — it’s just outdated for most real use cases.&lt;/p&gt;

&lt;p&gt;T3 is usually the safer and smarter choice now.&lt;/p&gt;

&lt;p&gt;🔗 Full article: &lt;a href="https://www.kubeblogs.com/why-t3-is-better-than-t2-for-most-aws-ec2-workloads/" rel="noopener noreferrer"&gt;https://www.kubeblogs.com/why-t3-is-better-than-t2-for-most-aws-ec2-workloads/&lt;/a&gt;&lt;br&gt;
If you're into simple, practical DevOps tips, follow along 👍&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
