<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mohammad Shoeb</title>
    <description>The latest articles on Forem by Mohammad Shoeb (@mohammad_shoeb_8cf8645287).</description>
    <link>https://forem.com/mohammad_shoeb_8cf8645287</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3264083%2F930ee778-9ebf-4b6e-b112-da01fb0c7fc6.jpg</url>
      <title>Forem: Mohammad Shoeb</title>
      <link>https://forem.com/mohammad_shoeb_8cf8645287</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mohammad_shoeb_8cf8645287"/>
    <language>en</language>
    <item>
      <title>The C# Feature That Saves You Thousands… But No One Talks About</title>
      <dc:creator>Mohammad Shoeb</dc:creator>
      <pubDate>Wed, 27 Aug 2025 10:57:05 +0000</pubDate>
      <link>https://forem.com/mohammad_shoeb_8cf8645287/the-c-feature-that-saves-you-thousands-but-no-one-talks-about-10d4</link>
      <guid>https://forem.com/mohammad_shoeb_8cf8645287/the-c-feature-that-saves-you-thousands-but-no-one-talks-about-10d4</guid>
      <description>&lt;h2&gt;
  
  
  Why This Blog Matters
&lt;/h2&gt;

&lt;p&gt;Most cloud overruns don’t come from AI models or Kubernetes complexity.&lt;br&gt;&lt;br&gt;
They come from &lt;strong&gt;small inefficiencies in your .NET code&lt;/strong&gt; — the kind of hidden multipliers that chew up CPU, memory, and dollars every day.&lt;/p&gt;

&lt;p&gt;Microsoft has quietly shipped features that don’t get keynote spotlight but can decide whether your &lt;strong&gt;App Service scales down&lt;/strong&gt; or your &lt;strong&gt;CFO calls about a $30K overage&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;This post highlights one such overlooked gem: &lt;strong&gt;Frozen Collections in .NET 8&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Hidden Hero: Frozen Collections
&lt;/h2&gt;

&lt;p&gt;Dictionaries and hash sets are staples of .NET. But at scale, they carry a hidden tax:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GC churn from repeated allocations
&lt;/li&gt;
&lt;li&gt;CPU spent on general-purpose hashing and collision handling on every lookup
&lt;/li&gt;
&lt;li&gt;App Service scale-outs triggered by unnecessary memory pressure
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frozen Collections&lt;/strong&gt; (&lt;a href="https://learn.microsoft.com/en-us/dotnet/api/system.collections.frozen" rel="noopener noreferrer"&gt;Microsoft Docs&lt;/a&gt;) flip the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Immutable&lt;/strong&gt; → once built, they never change (thread-safe by default)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-optimized&lt;/strong&gt; → the expensive key analysis and layout work is done once at build time
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput wins&lt;/strong&gt; → Microsoft benchmarks show &lt;code&gt;FrozenDictionary&lt;/code&gt; achieves ~1.9 ns/lookup vs ~2.5 ns for &lt;code&gt;Dictionary&amp;lt;TKey,TValue&amp;gt;&lt;/code&gt; (~25% faster steady-state)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory efficiency&lt;/strong&gt; → lower footprint, fewer GC pauses
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Think of it as paying the optimization cost up front. &lt;code&gt;Dictionary&lt;/code&gt; uses a general-purpose layout on every lookup; &lt;code&gt;FrozenDictionary&lt;/code&gt; analyzes its keys once at construction and picks the fastest strategy for exactly that data.&lt;/p&gt;


&lt;h2&gt;
  
  
  Real-World Case: Cosmos DB Partition Routing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Workload Context:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;~12K requests/sec
&lt;/li&gt;
&lt;li&gt;~10K partition keys (static)
&lt;/li&gt;
&lt;li&gt;Routing service hosted on &lt;strong&gt;Azure App Service P1v3 (2 cores)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Partition map rebuilt once at startup
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Observed Behavior (Before):&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
With &lt;code&gt;Dictionary&amp;lt;string,string&amp;gt;&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU averaged &lt;strong&gt;85%&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;p99 latency ~&lt;strong&gt;180 ms&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Service scaled from 4 → 6 instances
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost impact&lt;/strong&gt;: +2 × P1v3 = &lt;strong&gt;$560/month = $6,720/year&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;After Swap (FrozenDictionary):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static readonly FrozenDictionary&amp;lt;string,string&amp;gt; PartitionMap =
routingList.ToFrozenDictionary(x =&amp;gt; x.Key, x =&amp;gt; x.Value);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Results:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CPU dropped from &lt;strong&gt;85% to 67%&lt;/strong&gt;, and the service scaled back from 6 to 4 instances.
&lt;/li&gt;
&lt;li&gt;💡 In this single service: &lt;strong&gt;$6,720/year saved&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;💡 Across five routing-heavy services with similar lookup patterns: &lt;strong&gt;~$33K/year saved&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Profiling Evidence (dotnet-counters)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Test setup:&lt;/strong&gt; 30-minute steady load at 12K RPS using k6, 10K partition keys.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;% Time in GC:   &lt;strong&gt;12.3% → 5.8%&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Alloc Rate:     &lt;strong&gt;350 MB/s → 120 MB/s&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;CPU Usage:      &lt;strong&gt;85% → 67%&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Lower &lt;strong&gt;% Time in GC&lt;/strong&gt; + lower allocation rate = fewer pauses and more CPU available for real work.&lt;/p&gt;




&lt;h2&gt;
  
  
  Microsoft Benchmark Data (Build vs Lookup Trade-Off)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/" rel="noopener noreferrer"&gt;.NET 8 performance benchmarks&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Startup:&lt;/strong&gt; ~4× slower — but only paid once.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lookups:&lt;/strong&gt; faster and cheaper forever.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For large, static sets (≥10K entries in this case), Frozen wins.&lt;br&gt;&lt;br&gt;
For small or frequently mutating sets, stick with &lt;code&gt;Dictionary&lt;/code&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  Visuals (Conceptual)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;App Service Scaling (Before vs After)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dictionary: 6 Instances at peak load
&lt;/li&gt;
&lt;li&gt;FrozenDictionary: 4 Instances at peak load
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;GC Time Reduction&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From 12.3% → 5.8%
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  When to Use (and When Not To)
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Best suited for&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configurations that don’t change after startup
&lt;/li&gt;
&lt;li&gt;Routing tables (e.g., Cosmos DB partition maps)
&lt;/li&gt;
&lt;li&gt;Feature flags / metadata lookups
&lt;/li&gt;
&lt;li&gt;Multi-threaded services (immutability removes locking overhead)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;❌ &lt;strong&gt;Avoid if&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collections need frequent updates
&lt;/li&gt;
&lt;li&gt;Dataset is tiny (startup overhead &amp;gt; benefit)
&lt;/li&gt;
&lt;li&gt;Startup latency is critical (cold path scenarios)
&lt;/li&gt;
&lt;/ul&gt;
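
&lt;p&gt;For the feature-flag case above, &lt;code&gt;FrozenSet&amp;lt;T&amp;gt;&lt;/code&gt; gives the same win for membership checks. A minimal sketch (the flag names are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Collections.Frozen;

// Built once at startup; immutable and safely shared across threads.
private static readonly FrozenSet&amp;lt;string&amp;gt; EnabledFlags =
    new[] { "new-checkout", "dark-mode" }
        .ToFrozenSet(StringComparer.OrdinalIgnoreCase);

bool darkModeOn = EnabledFlags.Contains("Dark-Mode"); // case-insensitive hit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;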


&lt;h2&gt;
  
  
  The Bigger Lesson
&lt;/h2&gt;

&lt;p&gt;As engineers, we often jump to flashy fixes: caching layers, Redis tuning, Kubernetes autoscaling.  &lt;/p&gt;

&lt;p&gt;But sometimes, the &lt;strong&gt;highest ROI comes from the most boring one-liner in your codebase&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Frozen Collections don’t just clean up code. They cut real dollars off your bill.&lt;/p&gt;


&lt;h2&gt;
  
  
  ⚡ Coming Next in This Series
&lt;/h2&gt;

&lt;p&gt;Frozen Collections are just one of Microsoft’s “quiet million-savers.”  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Part 2&lt;/strong&gt; — LoggerMessage Source Generator: How precompiled logging cut one team’s App Insights bill by 30%
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 3&lt;/strong&gt; — Cosmos DB Analytical Store TTL: How a retailer saved $3,000/month with one config
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow to catch the full series.&lt;/p&gt;


&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;If you’re running high-throughput .NET apps in 2025, ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Am I paying CPU + GC tax for collections that never change?
&lt;/li&gt;
&lt;li&gt;Do I need to scale out, or can I scale smarter?
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because sometimes the difference between burning money and saving it is just one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myList.ToFrozenDictionary(x =&amp;gt; x.Id, x =&amp;gt; x.Value);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s not glamorous — but it’s the kind of decision that compounds into thousands of dollars at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  Call to Action
&lt;/h2&gt;

&lt;p&gt;At Microsoft, we obsess over deleting the &lt;strong&gt;right line of code&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;This one deletion (of a &lt;code&gt;Dictionary&lt;/code&gt;) saved us &lt;strong&gt;$6,720/year in one service&lt;/strong&gt; — and &lt;strong&gt;~$33K/year across the fleet&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;👉 What’s the most expensive line of code you’ve ever deleted?&lt;br&gt;&lt;br&gt;
Drop it in the comments — I’ll feature the top 3 in &lt;strong&gt;Part 2&lt;/strong&gt; of this series.  &lt;/p&gt;

&lt;p&gt;In enterprise systems, efficiency isn’t optional — it compounds into millions at scale.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Frozen Collections&lt;/strong&gt; are one of those tools you overlook until you see the bill. Don’t wait for that moment.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
      <category>microsoft</category>
      <category>programming</category>
    </item>
    <item>
      <title>Azure Isn't Expensive — Your Code Is: 10 Proven Patterns That Cut Costs Fast</title>
      <dc:creator>Mohammad Shoeb</dc:creator>
      <pubDate>Wed, 27 Aug 2025 10:44:03 +0000</pubDate>
      <link>https://forem.com/mohammad_shoeb_8cf8645287/azure-isnt-expensive-your-code-is-10-proven-patterns-that-cut-costs-fast-495k</link>
      <guid>https://forem.com/mohammad_shoeb_8cf8645287/azure-isnt-expensive-your-code-is-10-proven-patterns-that-cut-costs-fast-495k</guid>
      <description>&lt;p&gt;🔥 Why This Blog Matters&lt;br&gt;
❌ A company I worked with once spent $7,400 in a single month…&lt;br&gt;
Just because an Azure Function was triggered around 8 million times — by stale messages in a forgotten queue.&lt;br&gt;
The devs didn’t even know it was still active.&lt;/p&gt;

&lt;p&gt;💡 You don’t need cheaper pricing tiers.&lt;br&gt;
You need smarter code.&lt;/p&gt;

&lt;p&gt;This post is your developer-first playbook.&lt;br&gt;
I’ll show you 10 real code patterns that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slash compute, storage, and telemetry waste&lt;/li&gt;
&lt;li&gt;Are used inside Microsoft (Azure SDKs, Copilot infra, and durable workloads)&lt;/li&gt;
&lt;li&gt;Can be implemented within a couple of weeks&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;👻 Pattern 0: Cost Ghosts — What’s Billing You Silently&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;“You can’t optimize what you don’t measure.” — Azure Advisor Team&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Ignoring silent costs from idle VMs, orphaned App Services, or chatty logs.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Start with diagnostics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Cost Analysis → by tag or service&lt;/li&gt;
&lt;li&gt;Application Insights → filter high-traffic routes&lt;/li&gt;
&lt;li&gt;Azure Workbooks → cost-by-operation&lt;/li&gt;
&lt;li&gt;Azure Advisor → underutilized resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Tag every resource by team + feature. Unused ones will light up.&lt;/p&gt;

&lt;h2&gt;🧠 Pattern 1: Queue-First, Not Compute-First&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Triggering Functions or APIs directly per event — even when the work could wait.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use Storage Queues, Service Bus, or Event Hubs to buffer and batch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API → Queue → Azure Function (batch trigger)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;“Queues are cheaper than compute. Push work downstream.” — Azure Arch Center&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Ingest telemetry → Event Hub → Storage Queue → Batch Function&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 30–60% reduction in over-scaled compute&lt;/p&gt;
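
&lt;p&gt;A sketch of the batch-trigger side using the isolated-worker Service Bus extension (the queue and type names here are illustrative, not from a real system):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;

public class OrderBatchProcessor
{
    // IsBatched = true delivers an array per invocation,
    // so one execution drains many messages instead of one each.
    [Function("ProcessOrderBatch")]
    public void Run(
        [ServiceBusTrigger("orders", IsBatched = true)] ServiceBusReceivedMessage[] messages)
    {
        foreach (var msg in messages)
        {
            // process msg.Body here
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;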

&lt;h2&gt;⏳ Pattern 2: Time-to-Live Everything&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Cosmos DB documents, queue messages, and logs live forever — and you keep paying for ghost data.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use TTL aggressively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cosmos DB: &lt;code&gt;defaultTimeToLive&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Service Bus: &lt;code&gt;TimeToLive&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Blobs: lifecycle policies&lt;/li&gt;
&lt;li&gt;Durable Functions: auto-purge history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;“Use TTL to enforce data lifecycle and reduce cost.” — Cosmos TTL Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Set Service Bus TTL = 48 hours, Cosmos = 7 days.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 10–25% on storage &amp;amp; telemetry&lt;/p&gt;
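
&lt;p&gt;For Cosmos DB, the TTL can be set when the container is created. A hedged sketch with the .NET SDK (&lt;code&gt;database&lt;/code&gt;, the container name, and the partition key path are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.Azure.Cosmos;

// Items expire 7 days after their last write; a background task purges them.
var props = new ContainerProperties(id: "orders", partitionKeyPath: "/customerId")
{
    DefaultTimeToLive = (int)TimeSpan.FromDays(7).TotalSeconds
};
ContainerResponse container = await database.CreateContainerIfNotExistsAsync(props);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;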

&lt;h2&gt;📈 Pattern 3: Cap Concurrency — Don’t Over-AutoScale&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Relying on autoscaling alone, which leads to bill spikes under load.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Apply hard caps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Functions → &lt;code&gt;maxConcurrentRequests&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Web APIs → &lt;code&gt;SemaphoreSlim&lt;/code&gt;, &lt;code&gt;ParallelOptions&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Polly → &lt;code&gt;BulkheadPolicy&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;“Concurrency limits prevent downstream exhaustion and surprise bills.” — Functions Scale Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Limit queue-triggered Functions to 5 concurrent instances.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 30–50% compute reduction in spikes&lt;/p&gt;
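
&lt;p&gt;The quick win above is a &lt;code&gt;host.json&lt;/code&gt; change for queue-triggered Functions. A config sketch for the Service Bus trigger (the value is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 5
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;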

&lt;h2&gt;🧾 Pattern 4: Structured Logging + Sampling&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Logging every HTTP response at Information level. Every. Single. Time.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use structured logs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_logger.LogInformation("Order {Id} processed at {Time}", id, DateTime.UtcNow);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;…plus Application Insights sampling and log filters in &lt;code&gt;host.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Use sampling to reduce ingestion, not insight.” — App Insights Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Use &lt;code&gt;AdaptiveSamplingTelemetryProcessor&lt;/code&gt;. It auto-throttles verbosity.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 40–70% on App Insights ingestion cost&lt;/p&gt;
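
&lt;p&gt;In Azure Functions, adaptive sampling is a &lt;code&gt;host.json&lt;/code&gt; setting. A config sketch (the per-second cap is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 5,
        "excludedTypes": "Exception"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;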

&lt;h2&gt;⏱️ Pattern 5: Use Durable Timers Instead of Waiting&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Using &lt;code&gt;Task.Delay&lt;/code&gt; or polling loops, which keep the Function active and bill you for every second of waiting.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use &lt;code&gt;context.CreateTimer(...)&lt;/code&gt; in Durable Functions (note &lt;code&gt;CurrentUtcDateTime&lt;/code&gt;, not &lt;code&gt;DateTime.UtcNow&lt;/code&gt;, to keep the orchestration deterministic on replay):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await context.CreateTimer(context.CurrentUtcDateTime.AddMinutes(15), CancellationToken.None);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;“Durable timers pause without active billing.” — Durable Functions Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Replace &lt;code&gt;Thread.Sleep&lt;/code&gt; with &lt;code&gt;CreateTimer&lt;/code&gt; in approval workflows.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 50–90% on wait-heavy Functions&lt;/p&gt;

&lt;h2&gt;📦 Pattern 6: Blob Tiering Isn’t Optional&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Leaving all blobs in the Hot tier — even audit logs and backups.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use Lifecycle Management to move data Hot → Cool → Archive → Delete.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Archive storage is up to 80% cheaper than Hot.” — Azure Storage Pricing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Set a lifecycle policy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;30 days → Cool&lt;/li&gt;
&lt;li&gt;90 days → Archive&lt;/li&gt;
&lt;li&gt;180 days → Delete&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 50–80% on storage cost&lt;/p&gt;
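
&lt;p&gt;The quick-win policy above maps directly to a lifecycle management rule. A JSON sketch (the rule name and blob filter are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "rules": [
    {
      "name": "tier-then-delete",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;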

&lt;h2&gt;🧲 Pattern 7: Trigger Filters Save You Real Money&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; A Function triggered by every Event Grid event — even irrelevant ones.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Filter at the source:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event Grid filters (&lt;code&gt;eventType&lt;/code&gt;, &lt;code&gt;subject&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Service Bus SQL filters&lt;/li&gt;
&lt;li&gt;Cosmos DB Change Feed filters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;“Filter events at the source to reduce compute.” — Event Grid Filters&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Trigger only when &lt;code&gt;eventType == "InvoiceCreated"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; 20–40% reduction in Function invocations&lt;/p&gt;
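
&lt;p&gt;With the Azure CLI, the quick win above is one flag on the event subscription (the subscription name and the two shell variables are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az eventgrid event-subscription create \
  --name invoice-created-only \
  --source-resource-id "$TOPIC_ID" \
  --endpoint "$FUNCTION_ENDPOINT" \
  --included-event-types InvoiceCreated
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;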

&lt;h2&gt;🔁 Pattern 8: Resilient Outbound Calls: Retry + Bulkhead&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Retrying failed APIs immediately, which creates a retry storm. Or worse — calling external APIs at 1,000 RPS when they can handle 50.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Use exponential backoff (here via the Azure Storage SDK’s built-in retry options):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var options = new BlobClientOptions
{
    Retry =
    {
        MaxRetries = 5,
        Mode = RetryMode.Exponential,
        Delay = TimeSpan.FromSeconds(2)
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And cap concurrency (release in &lt;code&gt;finally&lt;/code&gt; so a failed call doesn’t leak a slot):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await semaphore.WaitAsync();
try
{
    // API call
}
finally
{
    semaphore.Release();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;“Smart retries and bulkheads reduce downstream throttling and cost.” — Azure SDK + Cloud Patterns&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Use Polly + &lt;code&gt;SemaphoreSlim&lt;/code&gt; for any 3rd-party API call.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; Up to 50% reduction in failed/duplicated requests&lt;/p&gt;

&lt;h2&gt;📊 Pattern 9: Track Cost per Route, Not Just Monthly Spend&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Reviewing cost by month or service — not by API route or feature.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use App Insights &lt;code&gt;operation_Name&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Correlate with Azure Cost Exports&lt;/li&gt;
&lt;li&gt;Build Workbooks to analyze cost per endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;“Tags + correlation = true FinOps insight.” — Azure FinOps Docs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Tag by project, owner, and feature. Then sort cost by those.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Payoff:&lt;/strong&gt; Visibility = control = smarter decisions&lt;/p&gt;

&lt;h2&gt;🎁 Bonus Pattern 10: Stop Paying for Premium SKUs You Don’t Need&lt;/h2&gt;

&lt;p&gt;👎 &lt;strong&gt;Common Mistake:&lt;/strong&gt; Using P1 Redis, Standard App Service, or AKS with 3-node pools for tiny workloads.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Fix:&lt;/strong&gt; Audit your SKU usage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Redis: use Basic unless you’re clustering&lt;/li&gt;
&lt;li&gt;App Service: move idle apps to Free or Consumption&lt;/li&gt;
&lt;li&gt;AKS: scale to 0 or move to Azure Container Apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;“Premium doesn’t mean better. Just pricier.” — Azure Advisor&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Quick Win:&lt;/strong&gt; Downgrade an unused Redis cache from P1 to Basic.&lt;/p&gt;

&lt;p&gt;📉 &lt;strong&gt;Potential Savings:&lt;/strong&gt; $500–$1,500/month depending on SKUs&lt;/p&gt;

&lt;h2&gt;✅ Start Here: Apply These 3 Today&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add TTL to Cosmos DB + Service Bus&lt;/li&gt;
&lt;li&gt;Cap Function concurrency to 5&lt;/li&gt;
&lt;li&gt;Enable sampling in App Insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔍 Watch your Azure bill dip within 48 hours.&lt;/p&gt;

&lt;h2&gt;🧠 Final Thought&lt;/h2&gt;

&lt;p&gt;Smart code isn’t just faster — it’s cheaper. And it starts today.&lt;/p&gt;

&lt;p&gt;💬 Found a pattern you’re using already — or one that surprised you?&lt;br&gt;
Drop it in the comments or share this with a teammate burning Azure cash.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>azure</category>
      <category>csharp</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Stop Using Dumb Search Bars: Build Smart, AI-Powered Search with Azure + .NET</title>
      <dc:creator>Mohammad Shoeb</dc:creator>
      <pubDate>Sat, 14 Jun 2025 04:43:53 +0000</pubDate>
      <link>https://forem.com/mohammad_shoeb_8cf8645287/stop-using-dumb-search-bars-build-smart-ai-powered-search-with-azure-net-3p8l</link>
      <guid>https://forem.com/mohammad_shoeb_8cf8645287/stop-using-dumb-search-bars-build-smart-ai-powered-search-with-azure-net-3p8l</guid>
      <description>&lt;p&gt;Users are typing smart questions, but your search bar is still stuck on keyword matching.&lt;br&gt;
Let’s change that.&lt;/p&gt;

&lt;h2&gt;🎯 What You'll Build&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A .NET Web API that performs semantic + vector hybrid search using Azure Cognitive Search&lt;/li&gt;
&lt;li&gt;Integration with Azure OpenAI to implement Retrieval-Augmented Generation (RAG)&lt;/li&gt;
&lt;li&gt;Response enrichment that offers intelligent, context-aware answers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;🛠️ Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Azure subscription with Cognitive Search (Standard+) and Azure OpenAI resources&lt;/li&gt;
&lt;li&gt;.NET 7 or 8 SDK installed&lt;/li&gt;
&lt;li&gt;IDE: Visual Studio or VS Code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;🗂️ Architecture Overview&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[User Query] → [.NET API] → [Azure Search (Hybrid)] → [Relevant Docs] → [Azure OpenAI RAG] → [Answer]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;⚙️ Step 1: Configure Azure Cognitive Search&lt;/h2&gt;

&lt;p&gt;Create a Standard or higher-tier Search service in the Azure Portal (required for semantic &amp;amp; vector features), then install the SDK:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Azure.Search.Documents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: the SKU matters — lower tiers (e.g., Basic) won’t work for the vector/semantic features used here.&lt;/p&gt;

&lt;p&gt;Azure CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az search service create --name my-search --sku standard --resource-group my-rg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;📦 Step 2: Index Your Documents with Embeddings&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var client = new SearchClient(new Uri(endpoint), indexName, new AzureKeyCredential(apiKey));
await client.UploadDocumentsAsync(new[]
{
    new
    {
        id = "doc1",
        content = "Azure OpenAI enables powerful GPT models...",
        contentVector = /* float[] embedding from Azure OpenAI's embedding model */
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ℹ️ Use Azure OpenAI’s text-embedding-ada-002 or text-embedding-3-small to generate the vector.&lt;/p&gt;
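
&lt;p&gt;Generating that vector is one call. A hedged sketch with the Azure OpenAI .NET SDK (API shape as of Azure.AI.OpenAI 2.x; the deployment name is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Azure;
using Azure.AI.OpenAI;
using OpenAI.Embeddings;

var aoai = new AzureOpenAIClient(new Uri(openAiEndpoint), new AzureKeyCredential(openAiKey));
EmbeddingClient embedder = aoai.GetEmbeddingClient("text-embedding-3-small");

OpenAIEmbedding embedding =
    (await embedder.GenerateEmbeddingAsync("Azure OpenAI enables powerful GPT models...")).Value;
float[] contentVector = embedding.ToFloats().ToArray();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;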

&lt;h2&gt;🔍 Step 3: Perform Hybrid Semantic + Vector Search&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var options = new SearchOptions
{
    Vector = embeddingBytes,
    VectorFields = { "contentVector" },
    QueryType = SearchQueryType.Semantic,
    SemanticConfigurationName = "default",
    Size = 5
};
Response&amp;lt;SearchResults&amp;lt;SearchDocument&amp;gt;&amp;gt; response = await client.SearchAsync("azure ai", options);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;💬 Step 4: Add Azure OpenAI for RAG&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var openAi = new AzureOpenAIClient(new Uri(openAiEndpoint), new AzureKeyCredential(openAiKey));
var chat = openAi.GetChatCompletionsClient("gpt-35-turbo");

var resultDoc = response.Value.GetResults().First();
string resultContent = resultDoc.Document["content"].ToString();

var completion = await chat.GetChatCompletionsAsync(new ChatCompletionsOptions
{
    Messages = {
        new ChatMessage(ChatRole.System, "You are an AI assistant."),
        new ChatMessage(ChatRole.User, $"Answer based on: {resultContent}\n\nQuery: azure ai")
    },
    Temperature = 0.7f,
    MaxTokens = 256
});
Console.WriteLine(completion.Value.Choices[0].Message.Content);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📈 Real-World Use Cases&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft Learn: uses semantic + vector search for documentation lookup&lt;/li&gt;
&lt;li&gt;Enterprise RAG: internal knowledge base Q&amp;amp;A, compliance, automation&lt;/li&gt;
&lt;li&gt;Internal copilots: powering smart assistants across domains&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;🧠 GPT Prompt Engineering Tip&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new ChatMessage(ChatRole.User, $"Answer like a search assistant. Use this context: {resultContent}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;✅ Summary &amp;amp; Takeaways&lt;/h2&gt;

&lt;p&gt;In about 30 minutes, you've gone from basic keyword search to GPT-enhanced hybrid retrieval: vector embeddings, semantic ranking, and GPT completions wired together behind a .NET API.&lt;/p&gt;

&lt;p&gt;Before going to production, layer on encryption, scaling, and observability.&lt;/p&gt;

&lt;h2&gt;🔗 References&lt;/h2&gt;

&lt;p&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search" rel="noopener noreferrer"&gt;Azure Cognitive Search – Official Docs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/search/hybrid-search-overview" rel="noopener noreferrer"&gt;Hybrid Search in Azure&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=console" rel="noopener noreferrer"&gt;Azure OpenAI Embeddings&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/dotnet/api/overview/azure/ai.openai-readme?view=azure-dotnet" rel="noopener noreferrer"&gt;Azure OpenAI .NET SDK&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;⭐ Final CTA&lt;/h2&gt;

&lt;p&gt;💬 Found this helpful? Share it with your team, leave a reaction, and follow for more .NET + Azure AI deep dives.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>dotnetcore</category>
      <category>csharp</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
