<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kiran Rongali</title>
    <description>The latest articles on Forem by Kiran Rongali (@kiranrongali).</description>
    <link>https://forem.com/kiranrongali</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3052881%2F0b085f43-e1c1-4a44-a54b-50ec33968e36.png</url>
      <title>Forem: Kiran Rongali</title>
      <link>https://forem.com/kiranrongali</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kiranrongali"/>
    <language>en</language>
    <item>
      <title>What is Temporal.AI and How it helps in Building Reliable Workflows in .NET</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Thu, 12 Feb 2026 16:25:06 +0000</pubDate>
      <link>https://forem.com/kiranrongali/what-is-temporalai-and-how-it-helps-in-building-reliable-workflows-in-net-53el</link>
      <guid>https://forem.com/kiranrongali/what-is-temporalai-and-how-it-helps-in-building-reliable-workflows-in-net-53el</guid>
      <description>&lt;p&gt;&lt;strong&gt;Building Reliable Workflows in .NET with Temporal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Modern applications rarely perform just one simple action. A single business process might involve multiple microservices, APIs, databases, and external systems—and it might take minutes, hours, or even days to complete. Handling failures, retries, timeouts, and restarts in these long-running processes is one of the hardest parts of backend development.&lt;/p&gt;

&lt;p&gt;This is where Temporal comes in.&lt;/p&gt;

&lt;p&gt;Temporal is a workflow orchestration platform that helps you build reliable, fault-tolerant, long-running workflows using normal application code. With the Temporal .NET SDK, you can write workflows in C# while Temporal takes care of state management, retries, recovery, and scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Problem Does Temporal Solve?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In traditional .NET systems, long-running or multi-step processes often rely on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Background jobs&lt;/li&gt;
&lt;li&gt;Message queues&lt;/li&gt;
&lt;li&gt;Cron jobs or schedulers&lt;/li&gt;
&lt;li&gt;Custom retry and state management logic&lt;/li&gt;
&lt;li&gt;Database tables to track progress&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach quickly becomes complex and fragile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens if the service crashes mid-process?&lt;/li&gt;
&lt;li&gt;How do you resume exactly where you left off?&lt;/li&gt;
&lt;li&gt;How do you avoid duplicate processing?&lt;/li&gt;
&lt;li&gt;How do you handle retries, timeouts, and compensation logic cleanly?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Temporal solves these problems by making your workflows durable, restartable, and fault-tolerant by design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Temporal, in Simple Terms?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Temporal is a workflow engine that runs your business processes reliably, even across failures, restarts, and long delays.&lt;/p&gt;

&lt;p&gt;You write workflows in code (C# in .NET), and Temporal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persists the workflow state&lt;/li&gt;
&lt;li&gt;Automatically retries failed steps&lt;/li&gt;
&lt;li&gt;Resumes execution after crashes&lt;/li&gt;
&lt;li&gt;Handles timeouts and delays&lt;/li&gt;
&lt;li&gt;Ensures exactly-once execution semantics for workflow logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your code looks like normal code—but it runs with enterprise-grade reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts in Temporal&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Workflows&lt;/p&gt;

&lt;p&gt;Workflows define the business process. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Order processing&lt;/li&gt;
&lt;li&gt;User onboarding&lt;/li&gt;
&lt;li&gt;Data integration pipeline&lt;/li&gt;
&lt;li&gt;Payment and refund flow&lt;/li&gt;
&lt;li&gt;Multi-step approval process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Workflows are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deterministic&lt;/li&gt;
&lt;li&gt;Durable&lt;/li&gt;
&lt;li&gt;Able to run for seconds, hours, or months&lt;/li&gt;
&lt;li&gt;Automatically resumed after failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. Activities&lt;/p&gt;

&lt;p&gt;Activities are the actual work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Call an API&lt;/li&gt;
&lt;li&gt;Write to a database&lt;/li&gt;
&lt;li&gt;Send an email&lt;/li&gt;
&lt;li&gt;Process a file&lt;/li&gt;
&lt;li&gt;Run a validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Activities can fail and be retried automatically based on policies you define.&lt;/p&gt;

&lt;p&gt;3. Workers&lt;/p&gt;

&lt;p&gt;Workers are .NET services that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Poll Temporal for work&lt;/li&gt;
&lt;li&gt;Execute workflows and activities&lt;/li&gt;
&lt;li&gt;Report results back to Temporal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4. Temporal Server&lt;/p&gt;

&lt;p&gt;The Temporal server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores workflow state&lt;/li&gt;
&lt;li&gt;Tracks history&lt;/li&gt;
&lt;li&gt;Handles retries, timers, and scheduling&lt;/li&gt;
&lt;li&gt;Guarantees reliability and consistency&lt;/li&gt;
&lt;/ul&gt;
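&lt;p&gt;As a sketch of how these concepts map to code, here is a minimal activity and workflow written with the Temporal .NET SDK (the Temporalio NuGet package). The type and method names are hypothetical, and a worker process would register both types and execute them against a Temporal server:&lt;/p&gt;

```csharp
using System;
using System.Threading.Tasks;
using Temporalio.Activities;
using Temporalio.Workflows;

// An Activity is an ordinary method that does real work.
public static class EmailActivities
{
    [Activity]
    public static async Task SendWelcomeEmailAsync(string userId)
    {
        // Call your email provider here; Temporal retries this step on failure.
        await Task.CompletedTask;
    }
}

// A Workflow orchestrates activities; Temporal persists its state and
// resumes it after crashes.
[Workflow]
public class OnboardingWorkflow
{
    [WorkflowRun]
    public async Task RunAsync(string userId)
    {
        await Workflow.ExecuteActivityAsync(
            () =&gt; EmailActivities.SendWelcomeEmailAsync(userId),
            new ActivityOptions { StartToCloseTimeout = TimeSpan.FromMinutes(1) });
    }
}
```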

&lt;p&gt;&lt;strong&gt;Why Use Temporal in .NET Applications?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reliability by Default &lt;br&gt;
If your .NET service crashes, restarts, or scales down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your workflow does not break&lt;/li&gt;
&lt;li&gt;It resumes exactly where it left off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Built-in Retries and Timeouts&lt;br&gt;
No more custom retry loops everywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure retry policies per activity&lt;/li&gt;
&lt;li&gt;Temporal handles backoff, limits, and failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Long-Running Processes Are Easy&lt;br&gt;
You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait for hours or days&lt;/li&gt;
&lt;li&gt;Wait for human input&lt;/li&gt;
&lt;li&gt;Wait for external systems&lt;/li&gt;
&lt;li&gt;Continue without losing state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clean Business Logic&lt;/p&gt;

&lt;p&gt;Your workflow code stays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Readable&lt;/li&gt;
&lt;li&gt;Linear&lt;/li&gt;
&lt;li&gt;Maintainable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this instead of being split across queues, tables, and cron jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A .NET-Oriented Example Use Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a data integration pipeline in .NET:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch data from a partner API&lt;/li&gt;
&lt;li&gt;Validate and transform the data&lt;/li&gt;
&lt;li&gt;Save it to your database&lt;/li&gt;
&lt;li&gt;Call another system&lt;/li&gt;
&lt;li&gt;If any step fails, retry with backoff&lt;/li&gt;
&lt;li&gt;If the service crashes, resume from the last successful step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Temporal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each step is an Activity&lt;/li&gt;
&lt;li&gt;The whole flow is a Workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Temporal handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retries&lt;/li&gt;
&lt;li&gt;State persistence&lt;/li&gt;
&lt;li&gt;Crash recovery&lt;/li&gt;
&lt;li&gt;Timeouts&lt;/li&gt;
&lt;li&gt;Exactly-once workflow execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don’t need to write custom “resume logic” or “status tables”.&lt;/p&gt;
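&lt;p&gt;A pipeline like the one above might be sketched as follows with the Temporal .NET SDK (Temporalio package). The activity names are made up for illustration; the point is the per-activity retry policy and the linear workflow code:&lt;/p&gt;

```csharp
using System;
using System.Threading.Tasks;
using Temporalio.Activities;
using Temporalio.Common;
using Temporalio.Workflows;

// Hypothetical activities, one per pipeline step.
public static class PipelineActivities
{
    [Activity]
    public static async Task FetchPartnerDataAsync() =&gt; await Task.CompletedTask;

    [Activity]
    public static async Task TransformAndSaveAsync() =&gt; await Task.CompletedTask;

    [Activity]
    public static async Task NotifyDownstreamAsync() =&gt; await Task.CompletedTask;
}

[Workflow]
public class DataPipelineWorkflow
{
    [WorkflowRun]
    public async Task RunAsync()
    {
        // Per-activity retry policy: Temporal applies the backoff itself,
        // and the workflow resumes from the last completed step after a crash.
        var options = new ActivityOptions
        {
            StartToCloseTimeout = TimeSpan.FromMinutes(5),
            RetryPolicy = new RetryPolicy
            {
                InitialInterval = TimeSpan.FromSeconds(2),
                BackoffCoefficient = 2,
                MaximumAttempts = 5,
            },
        };

        await Workflow.ExecuteActivityAsync(() =&gt; PipelineActivities.FetchPartnerDataAsync(), options);
        await Workflow.ExecuteActivityAsync(() =&gt; PipelineActivities.TransformAndSaveAsync(), options);
        await Workflow.ExecuteActivityAsync(() =&gt; PipelineActivities.NotifyDownstreamAsync(), options);
    }
}
```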

&lt;p&gt;&lt;strong&gt;How Temporal Fits into the .NET Ecosystem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Temporal works great with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ASP.NET Core APIs&lt;/li&gt;
&lt;li&gt;Background services / workers&lt;/li&gt;
&lt;li&gt;Microservices architectures&lt;/li&gt;
&lt;li&gt;Azure / AWS / GCP deployments&lt;/li&gt;
&lt;li&gt;Message queues and event-driven systems&lt;/li&gt;
&lt;li&gt;Data integration and ETL pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ASP.NET API starts a Temporal workflow&lt;/li&gt;
&lt;li&gt;.NET worker executes activities&lt;/li&gt;
&lt;li&gt;Workflow coordinates calls to multiple services&lt;/li&gt;
&lt;li&gt;Temporal ensures reliability and consistency&lt;/li&gt;
&lt;/ul&gt;
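&lt;p&gt;The first pattern, an API starting a workflow, might look like this with the Temporal .NET SDK; the workflow type, server address, workflow ID, and task queue name here are all hypothetical:&lt;/p&gt;

```csharp
using Temporalio.Client;

// Connect to the Temporal server (address is an example).
var client = await TemporalClient.ConnectAsync(new("localhost:7233"));

// Start the workflow and return immediately; a .NET worker polling
// "pipeline-task-queue" will execute the workflow and its activities.
var handle = await client.StartWorkflowAsync(
    (DataPipelineWorkflow wf) =&gt; wf.RunAsync(),
    new(id: "pipeline-2026-02-12", taskQueue: "pipeline-task-queue"));
```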

&lt;p&gt;&lt;strong&gt;Temporal vs Traditional Approaches&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional Approach&lt;/th&gt;
&lt;th&gt;Problems&lt;/th&gt;
&lt;th&gt;Temporal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Background jobs&lt;/td&gt;
&lt;td&gt;Hard to resume state&lt;/td&gt;
&lt;td&gt;Durable workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message queues&lt;/td&gt;
&lt;td&gt;You manage retries &amp;amp; state&lt;/td&gt;
&lt;td&gt;Built-in reliability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cron jobs&lt;/td&gt;
&lt;td&gt;No workflow state&lt;/td&gt;
&lt;td&gt;Full workflow history&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Custom orchestration&lt;/td&gt;
&lt;td&gt;Complex &amp;amp; error-prone&lt;/td&gt;
&lt;td&gt;Simple, code-based orchestration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;When Should You Use Temporal?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Temporal is a great fit when you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-running business processes&lt;/li&gt;
&lt;li&gt;Multi-step workflows across services&lt;/li&gt;
&lt;li&gt;Complex retry and failure handling needs&lt;/li&gt;
&lt;li&gt;Integration pipelines&lt;/li&gt;
&lt;li&gt;Distributed transactions (Saga pattern)&lt;/li&gt;
&lt;li&gt;Human-in-the-loop workflows&lt;/li&gt;
&lt;li&gt;Critical processes that must not break or lose state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your system is “just a simple CRUD app,” you might not need it. But for enterprise workflows and integrations, Temporal can be a game-changer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Temporal brings a powerful idea to .NET development: write normal code, get enterprise-grade reliability for free.&lt;/p&gt;

&lt;p&gt;With the Temporal .NET SDK, you can build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Durable workflows&lt;/li&gt;
&lt;li&gt;Fault-tolerant integrations&lt;/li&gt;
&lt;li&gt;Scalable background processes&lt;/li&gt;
&lt;li&gt;Clean, maintainable orchestration logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of fighting failures, retries, and restarts, you let Temporal handle them—so you can focus on business logic, not plumbing.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>dotnet</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building Reliable and Scalable Data Integration Pipelines</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Fri, 06 Feb 2026 18:59:36 +0000</pubDate>
      <link>https://forem.com/kiranrongali/building-reliable-and-scalable-data-integration-pipelines-1hi3</link>
      <guid>https://forem.com/kiranrongali/building-reliable-and-scalable-data-integration-pipelines-1hi3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Building Reliable and Scalable Data Integration Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In today’s digital world, organizations rarely rely on a single system. Data flows continuously between applications, databases, cloud services, partners, and analytics platforms. Making sure this data moves accurately, securely, and efficiently is the job of a data integration pipeline.&lt;/p&gt;

&lt;p&gt;A well-designed data integration pipeline is not just about moving data from Point A to Point B—it’s about ensuring quality, performance, scalability, and reliability across the entire data journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is a Data Integration Pipeline?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A data integration pipeline is an automated process that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracts data from one or more source systems&lt;/li&gt;
&lt;li&gt;Transforms the data into the required format or structure&lt;/li&gt;
&lt;li&gt;Loads the data into a target system such as a database, data warehouse, API, or analytics platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pattern is often called ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform), depending on where transformations happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common use cases include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Syncing data between business applications&lt;/li&gt;
&lt;li&gt;Feeding data into reporting and analytics systems&lt;/li&gt;
&lt;li&gt;Integrating partner or third-party data&lt;/li&gt;
&lt;li&gt;Migrating data between systems&lt;/li&gt;
&lt;li&gt;Powering real-time or near-real-time workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Core Components of a Data Integration Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data Sources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Databases (SQL Server, Oracle, PostgreSQL, etc.)&lt;/li&gt;
&lt;li&gt;APIs and web services&lt;/li&gt;
&lt;li&gt;Files (CSV, JSON, XML)&lt;/li&gt;
&lt;li&gt;Message queues and event streams&lt;/li&gt;
&lt;li&gt;SaaS applications (CRM, ERP, billing systems)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Extraction Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This layer is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connecting to source systems&lt;/li&gt;
&lt;li&gt;Pulling data in batches or streams&lt;/li&gt;
&lt;li&gt;Handling authentication, pagination, and rate limits&lt;/li&gt;
&lt;li&gt;Detecting changes (full load vs incremental load)&lt;/li&gt;
&lt;/ul&gt;
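&lt;p&gt;As a plain C# illustration of the incremental-load idea, the sketch below filters source rows by a "last modified" watermark and then advances the watermark for the next run; the data and names are hypothetical:&lt;/p&gt;

```csharp
using System;
using System.Collections;

// Each row is { id, lastModifiedUtc }; in practice these come from a query.
var sourceRows = new ArrayList
{
    new object[] { 1, new DateTime(2026, 1, 1) },
    new object[] { 2, new DateTime(2026, 2, 1) },
    new object[] { 3, new DateTime(2026, 2, 5) },
};

// Watermark persisted from the previous pipeline run.
DateTime lastWatermark = new DateTime(2026, 1, 15);

// Extract only rows modified after the watermark (the delta).
ArrayList ExtractDelta(ArrayList rows, DateTime watermark)
{
    var changed = new ArrayList();
    foreach (object[] row in rows)
    {
        var modified = (DateTime)row[1];
        if (modified > watermark) changed.Add(row);
    }
    return changed;
}

var delta = ExtractDelta(sourceRows, lastWatermark);
Console.WriteLine(delta.Count); // 2 rows changed since the watermark

// Advance the watermark to the newest timestamp seen, for the next run.
foreach (object[] row in delta)
{
    var modified = (DateTime)row[1];
    if (modified > lastWatermark) lastWatermark = modified;
}
```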

&lt;p&gt;&lt;strong&gt;3. Transformation Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where data is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cleaned (remove duplicates, fix formats, handle nulls)&lt;/li&gt;
&lt;li&gt;Validated (check data types, ranges, mandatory fields)&lt;/li&gt;
&lt;li&gt;Mapped (convert source fields to target schema)&lt;/li&gt;
&lt;li&gt;Enriched (join with other data, add derived fields)&lt;/li&gt;
&lt;li&gt;Aggregated or filtered&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Good transformation logic ensures data quality and consistency across systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Loading Layer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The final step writes data to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data warehouses or data lakes&lt;/li&gt;
&lt;li&gt;Operational databases&lt;/li&gt;
&lt;li&gt;Search indexes&lt;/li&gt;
&lt;li&gt;Downstream applications or APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layer must handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bulk inserts vs upserts&lt;/li&gt;
&lt;li&gt;Idempotency (safe re-runs)&lt;/li&gt;
&lt;li&gt;Transaction management&lt;/li&gt;
&lt;li&gt;Error handling and retries&lt;/li&gt;
&lt;/ul&gt;
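&lt;p&gt;Idempotency is the property that makes safe re-runs possible. A minimal sketch, using an in-memory table keyed by a business key to stand in for a real target store:&lt;/p&gt;

```csharp
using System;
using System.Collections;

// Stands in for a target table keyed by a business key (e.g., order ID).
var target = new Hashtable();

void UpsertBatch(Hashtable table, object[][] batch)
{
    foreach (object[] record in batch)
    {
        // Insert-or-update keyed on the business key: loading the same
        // batch twice leaves the target in the same state (no duplicates).
        table[record[0]] = record[1];
    }
}

var batch = new object[][]
{
    new object[] { "ORD-1", "pending" },
    new object[] { "ORD-2", "shipped" },
};

UpsertBatch(target, batch);
UpsertBatch(target, batch); // simulated re-run after a crash

Console.WriteLine(target.Count); // 2
```

&lt;p&gt;With a real database, the same idea becomes an upsert (MERGE, INSERT ... ON CONFLICT) keyed on a natural or surrogate business key.&lt;/p&gt;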

&lt;p&gt;&lt;strong&gt;Batch vs Real-Time Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Batch Pipelines&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run on schedules (hourly, daily, weekly)&lt;/li&gt;
&lt;li&gt;Process large volumes of data at once&lt;/li&gt;
&lt;li&gt;Simpler to design and maintain&lt;/li&gt;
&lt;li&gt;Ideal for reporting, analytics, and historical processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-Time (Streaming) Pipelines&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process data as it arrives&lt;/li&gt;
&lt;li&gt;Lower latency&lt;/li&gt;
&lt;li&gt;More complex architecture&lt;/li&gt;
&lt;li&gt;Ideal for monitoring, alerts, personalization, and event-driven systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many modern systems use a hybrid approach: real-time for critical data, batch for heavy processing and analytics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance and Scalability Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As data volume grows, performance becomes critical. This is where understanding time and space complexity really matters.&lt;/p&gt;

&lt;p&gt;Good practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streaming or batching instead of loading everything into memory&lt;/li&gt;
&lt;li&gt;Avoiding nested loops over large datasets&lt;/li&gt;
&lt;li&gt;Using indexes, hash sets, or dictionaries for fast lookups&lt;/li&gt;
&lt;li&gt;Parallelizing work where possible&lt;/li&gt;
&lt;li&gt;Minimizing network calls by batching requests&lt;/li&gt;
&lt;li&gt;Processing only changed data (delta loads)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A pipeline that works for 10,000 records may fail or become painfully slow at 10 million if not designed properly.&lt;/p&gt;
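&lt;p&gt;The fast-lookup point is worth a concrete sketch: building a hash-based index of existing IDs once turns an O(n*m) nested-loop comparison into a single O(n) pass. The data here is made up:&lt;/p&gt;

```csharp
using System;
using System.Collections;

// Build an O(1) lookup of IDs already in the target, once.
var existingIds = new Hashtable();
foreach (int id in new int[] { 1, 2, 3 }) existingIds[id] = true;

var incoming = new int[] { 2, 3, 4, 5 };
var newIds = new ArrayList();

foreach (int id in incoming)
{
    // Constant-time hash lookup instead of rescanning the existing
    // set for every incoming record (the nested-loop anti-pattern).
    if (!existingIds.ContainsKey(id)) newIds.Add(id);
}

Console.WriteLine(newIds.Count); // 4 and 5 are new
```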

&lt;p&gt;&lt;strong&gt;Reliability and Error Handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Production-grade pipelines must expect failures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network timeouts&lt;/li&gt;
&lt;li&gt;API rate limits&lt;/li&gt;
&lt;li&gt;Bad or unexpected data&lt;/li&gt;
&lt;li&gt;Partial system outages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key reliability patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retries with backoff&lt;/li&gt;
&lt;li&gt;Dead-letter queues or error tables&lt;/li&gt;
&lt;li&gt;Checkpointing and resume capability&lt;/li&gt;
&lt;li&gt;Idempotent processing (safe to re-run)&lt;/li&gt;
&lt;li&gt;Detailed logging and monitoring&lt;/li&gt;
&lt;li&gt;Alerts for failures and data quality issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A good pipeline is not one that never fails—it’s one that fails safely and recovers gracefully.&lt;/p&gt;
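&lt;p&gt;The retry-with-backoff pattern can be sketched in a few lines of C#. The delays below are tiny only to keep the example fast; real pipelines use seconds and an upper bound:&lt;/p&gt;

```csharp
using System;
using System.Threading;

int attemptsMade = 0;

// Simulated flaky call: fails twice with a transient error, then succeeds.
string FlakyCall()
{
    attemptsMade++;
    if (attemptsMade != 3) throw new TimeoutException("transient failure");
    return "ok";
}

string result = "";
int maxAttempts = 5;
double delayMs = 10; // base delay; doubled after each failed attempt
int attempt = 0;

while (true)
{
    attempt++;
    try
    {
        result = FlakyCall();
        break; // success
    }
    catch (TimeoutException) when (attempt != maxAttempts)
    {
        // Exponential backoff before the next attempt; once maxAttempts
        // is reached, the filter stops matching and the error propagates.
        Thread.Sleep(TimeSpan.FromMilliseconds(delayMs));
        delayMs = delayMs * 2;
    }
}

Console.WriteLine($"succeeded after {attempt} attempts");
```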

&lt;p&gt;&lt;strong&gt;Security and Compliance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since pipelines often move sensitive data, security is critical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encrypt data in transit and at rest&lt;/li&gt;
&lt;li&gt;Secure credentials using vaults or managed identities&lt;/li&gt;
&lt;li&gt;Apply least-privilege access&lt;/li&gt;
&lt;li&gt;Mask or tokenize sensitive fields where needed&lt;/li&gt;
&lt;li&gt;Maintain audit logs and data lineage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tools and Technologies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data integration pipelines can be built using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom code (.NET, Java, Python, Node.js)&lt;/li&gt;
&lt;li&gt;ETL/ELT tools (SSIS, Azure Data Factory, Informatica, Talend)&lt;/li&gt;
&lt;li&gt;Streaming platforms (Kafka, Azure Event Hubs)&lt;/li&gt;
&lt;li&gt;Cloud-native services (AWS Glue, Azure Synapse, GCP Dataflow)&lt;/li&gt;
&lt;li&gt;A combination of the above&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The “best” tool depends on scale, complexity, budget, and team skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data integration pipelines are the backbone of modern digital systems. A well-designed pipeline ensures that data is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accurate&lt;/li&gt;
&lt;li&gt;Timely&lt;/li&gt;
&lt;li&gt;Secure&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Reliable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By focusing not just on moving data, but on performance, quality, and resilience, organizations can build integration platforms that support growth, analytics, and real-time business decisions with confidence.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>data</category>
      <category>dataengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>🚀 Accelerating Cloud Applications with Redis and Cloud-Based Caching Techniques</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Thu, 19 Jun 2025 20:22:10 +0000</pubDate>
      <link>https://forem.com/kiranrongali/accelerating-cloud-applications-with-redis-and-cloud-based-caching-techniques-3j93</link>
      <guid>https://forem.com/kiranrongali/accelerating-cloud-applications-with-redis-and-cloud-based-caching-techniques-3j93</guid>
      <description>&lt;p&gt;🔍 Introduction&lt;br&gt;
Modern cloud-native applications demand low-latency, high-throughput access to data. To meet these performance expectations, caching is a foundational technique used to temporarily store frequently accessed data in memory, reducing expensive round trips to databases or APIs.&lt;/p&gt;

&lt;p&gt;One of the most widely adopted caching solutions in cloud environments is Redis, particularly in Azure through Azure Cache for Redis. But various cloud providers offer a rich landscape of caching options beyond Redis.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How Redis Cache works in Azure&lt;/li&gt;
&lt;li&gt;When and why to use caching&lt;/li&gt;
&lt;li&gt;Additional caching techniques in Azure&lt;/li&gt;
&lt;li&gt;Caching in other cloud platforms (AWS, GCP)&lt;/li&gt;
&lt;li&gt;Best practices for cloud caching&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🌐 Redis Cache in Azure&lt;br&gt;
⚙️ What is Azure Cache for Redis?&lt;br&gt;
Azure Cache for Redis is a fully managed in-memory data store based on open-source Redis. It’s built to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session state caching&lt;/li&gt;
&lt;li&gt;Database query results&lt;/li&gt;
&lt;li&gt;Static content (e.g., configuration, feature flags)&lt;/li&gt;
&lt;li&gt;Rate-limiting and pub/sub messaging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 Key Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Geo-replication&lt;/li&gt;
&lt;li&gt;Zone redundancy&lt;/li&gt;
&lt;li&gt;Private endpoints and VNET integration&lt;/li&gt;
&lt;li&gt;Enterprise tiers with Redis on Flash (for large datasets)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧪 Sample Use Case: .NET App Integration&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Requires the StackExchange.Redis NuGet package
var cache = ConnectionMultiplexer.Connect("yourredis.redis.cache.windows.net:6380,password=yourPassword,ssl=True");
var db = cache.GetDatabase();
await db.StringSetAsync("key", "value");
string value = await db.StringGetAsync("key");
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can integrate Redis into a .NET Core or ASP.NET app for caching controller responses, user sessions, or computed results.&lt;/p&gt;

&lt;p&gt;🧰 Other Caching Techniques in Azure&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Best Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;In-Memory Caching&lt;/strong&gt; (MemoryCache)&lt;/td&gt;
&lt;td&gt;Caching data in application memory (e.g., .NET &lt;code&gt;MemoryCache&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Lightweight apps, single-instance services&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Blob Cache/CDN&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Use Azure Blob Storage + Azure CDN to cache static files&lt;/td&gt;
&lt;td&gt;Media, documents, web assets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output Caching&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ASP.NET Output/Response caching&lt;/td&gt;
&lt;td&gt;Web pages or API results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SQL Server Query Caching&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Caching execution plans or result sets&lt;/td&gt;
&lt;td&gt;Repetitive, read-heavy database queries&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;🌩️ Caching Across Other Cloud Platforms&lt;br&gt;
🔷 AWS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon ElastiCache (supports Redis and Memcached)&lt;/li&gt;
&lt;li&gt;CloudFront (CDN caching)&lt;/li&gt;
&lt;li&gt;DynamoDB Accelerator (DAX) – in-memory cache for DynamoDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔶 Google Cloud (GCP)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Memorystore (for Redis and Memcached)&lt;/li&gt;
&lt;li&gt;Cloud CDN – integrated with Google Cloud Storage &amp;amp; Load Balancer&lt;/li&gt;
&lt;li&gt;App Engine Memcache API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠️ Other Tools &amp;amp; Patterns&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local distributed cache (e.g., NCache for .NET, AppFabric legacy)&lt;/li&gt;
&lt;li&gt;Hybrid cache (local memory backed by cloud store)&lt;/li&gt;
&lt;li&gt;Write-through / write-behind cache for consistency with databases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Caching Best Practices in the Cloud&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use TTL (time-to-live) to prevent stale data&lt;/li&gt;
&lt;li&gt;Use the cache-aside pattern to populate the cache on demand&lt;/li&gt;
&lt;li&gt;Avoid caching sensitive data in shared caches&lt;/li&gt;
&lt;li&gt;Partition large cache sets to reduce eviction pressure&lt;/li&gt;
&lt;li&gt;Enable diagnostics and monitoring (Azure Monitor, Prometheus, etc.)&lt;/li&gt;
&lt;/ul&gt;
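&lt;p&gt;The cache-aside pattern from the list above can be sketched in plain C#. A Hashtable stands in for Redis here; with StackExchange.Redis the read and write would go to the cache server, and a TTL would be set on the write:&lt;/p&gt;

```csharp
using System;
using System.Collections;

var cache = new Hashtable(); // stands in for Redis
int loadsFromDatabase = 0;

string GetProduct(string key)
{
    // 1. Try the cache first.
    if (cache.ContainsKey(key)) return (string)cache[key];

    // 2. On a miss, load from the source of truth (simulated here)...
    loadsFromDatabase++;
    string value = "product-data-for-" + key;

    // 3. ...and populate the cache so subsequent reads are served from memory.
    cache[key] = value;
    return value;
}

Console.WriteLine(GetProduct("sku-42")); // miss: loads from the database
Console.WriteLine(GetProduct("sku-42")); // hit: served from cache
Console.WriteLine(loadsFromDatabase);    // 1
```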

&lt;p&gt;🧭 Conclusion&lt;br&gt;
Caching remains one of the most effective techniques to boost performance, reduce costs, and scale applications in the cloud. Azure Cache for Redis is a robust option for .NET and other workloads, but the broader cloud ecosystem offers various other tools tailored to different caching needs.&lt;/p&gt;

&lt;p&gt;By choosing the right caching strategy—whether distributed, local, or CDN—you can ensure your cloud applications are both performant and resilient.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Azure vs AWS vs Google Cloud: Which Cloud Platform is Right for You?</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Tue, 10 Jun 2025 20:00:03 +0000</pubDate>
      <link>https://forem.com/kiranrongali/azure-vs-aws-vs-google-cloud-which-cloud-platform-is-right-for-you-2024</link>
      <guid>https://forem.com/kiranrongali/azure-vs-aws-vs-google-cloud-which-cloud-platform-is-right-for-you-2024</guid>
      <description>&lt;p&gt;As cloud computing becomes the backbone of modern IT infrastructure, businesses are increasingly faced with a critical question: Which cloud provider should we choose? The top three contenders—Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP)—each offer powerful capabilities, but choosing between them depends on your organization's goals, tech stack, and long-term vision.&lt;/p&gt;

&lt;p&gt;Here’s a comprehensive comparison to help you make the best decision.&lt;/p&gt;




&lt;p&gt;⚖️ Compute Services&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: EC2, ECS, EKS, Lambda&lt;/li&gt;
&lt;li&gt;Azure: Virtual Machines, AKS, Azure Functions&lt;/li&gt;
&lt;li&gt;GCP: Compute Engine, GKE, Cloud Functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: AWS offers the broadest options, Azure integrates best with Microsoft tools, and GCP shines with Kubernetes (GKE).&lt;/p&gt;




&lt;p&gt;📀 Storage Services&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: S3, EFS, Glacier&lt;/li&gt;
&lt;li&gt;Azure: Blob Storage, Files, Archive&lt;/li&gt;
&lt;li&gt;GCP: Cloud Storage, Filestore, Archive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: All three offer robust storage. AWS S3 is industry-standard, Azure is strong for enterprises, and GCP has excellent performance-tier options.&lt;/p&gt;

&lt;p&gt;🧐 AI &amp;amp; Machine Learning&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: SageMaker, Comprehend, Polly&lt;/li&gt;
&lt;li&gt;Azure: Cognitive Services, Azure Machine Learning&lt;/li&gt;
&lt;li&gt;GCP: Vertex AI, AutoML, Vision API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: GCP leads with intuitive and powerful ML tools, AWS offers breadth, and Azure is enterprise-friendly.&lt;/p&gt;




&lt;p&gt;🚗 Databases&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: RDS, DynamoDB, Redshift&lt;/li&gt;
&lt;li&gt;Azure: SQL Database, Cosmos DB, Synapse&lt;/li&gt;
&lt;li&gt;GCP: Cloud SQL, Firestore, BigQuery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: AWS has broad database support. Azure is optimal for SQL workloads. GCP's BigQuery is top-tier for analytics.&lt;/p&gt;




&lt;p&gt;⚙️ DevOps &amp;amp; Tooling&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: CodePipeline, CloudFormation&lt;/li&gt;
&lt;li&gt;Azure: Azure DevOps, GitHub Actions&lt;/li&gt;
&lt;li&gt;GCP: Cloud Build, Deployment Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: Azure DevOps stands out for integration. AWS is modular and flexible. GCP is strong in modern CI/CD.&lt;/p&gt;




&lt;p&gt;🔐 Security &amp;amp; Identity&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: IAM, KMS, GuardDuty&lt;/li&gt;
&lt;li&gt;Azure: Azure Active Directory, Key Vault&lt;/li&gt;
&lt;li&gt;GCP: IAM, Cloud KMS, Security Command Center&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: Azure shines for identity (AAD), AWS offers granular security, GCP emphasizes security by design.&lt;/p&gt;




&lt;p&gt;💸 Pricing Models&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: Pay-as-you-go, Reserved Instances, Spot Pricing&lt;/li&gt;
&lt;li&gt;Azure: Hybrid Use Benefit, Reserved VM Instances&lt;/li&gt;
&lt;li&gt;GCP: Sustained and Committed Use Discounts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: GCP is most transparent with pricing. AWS offers the most options. Azure has discounts for Microsoft users.&lt;/p&gt;




&lt;p&gt;🌐 Global Presence&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS: 33+ regions, 500+ edge locations&lt;/li&gt;
&lt;li&gt;Azure: 60+ regions, 200+ edge locations&lt;/li&gt;
&lt;li&gt;GCP: 40+ regions, expanding rapidly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Takeaway: Azure leads in global coverage. AWS has the deepest infrastructure. GCP is growing steadily.&lt;/p&gt;




&lt;p&gt;📈 Summary Table&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iah3j0dl39u35arrgbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iah3j0dl39u35arrgbz.png" alt="Image description" width="297" height="141"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🎯 Final Thoughts&lt;/p&gt;

&lt;p&gt;Each cloud provider has unique strengths:&lt;/p&gt;

&lt;p&gt;AWS is best for startups and enterprises needing a wide variety of services and scalability.&lt;/p&gt;

&lt;p&gt;Azure is ideal for enterprises already invested in Microsoft tools.&lt;/p&gt;

&lt;p&gt;GCP is perfect for organizations focused on AI, machine learning, and analytics.&lt;/p&gt;

&lt;p&gt;For many businesses, multi-cloud or hybrid cloud strategies provide the best of all worlds.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>webdev</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🔧 Why Support Teams Rely on Log Analytics, Bastion, and Firewalls in Azure</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Tue, 10 Jun 2025 14:11:40 +0000</pubDate>
      <link>https://forem.com/kiranrongali/why-support-teams-rely-on-log-analytics-bastion-and-firewalls-in-azure-3l8k</link>
      <guid>https://forem.com/kiranrongali/why-support-teams-rely-on-log-analytics-bastion-and-firewalls-in-azure-3l8k</guid>
      <description>&lt;p&gt;In the fast-paced world of cloud infrastructure, support and operations teams play a crucial role in maintaining stability, security, and performance. Whether it's investigating a failed deployment, tracing suspicious activity, or accessing a virtual machine for emergency diagnostics — the right tools can make all the difference.&lt;/p&gt;

&lt;p&gt;Here’s why Log Analytics, Azure Bastion, and Azure Firewall have become essential tools in every support engineer’s toolbox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Analytics – The Eyes of Your Cloud&lt;/strong&gt;&lt;br&gt;
Azure Log Analytics is the go-to platform for collecting and analyzing telemetry data from across your environment. It centralizes logs from virtual machines, apps, networks, and services into one place — searchable with powerful queries.&lt;/p&gt;

&lt;p&gt;Real-World Example:&lt;br&gt;
A support engineer investigates why an app went down at midnight. Using Log Analytics, they correlate app service logs, backend SQL performance data, and CPU metrics — all in one query. Issue found. Fixed. Documented.&lt;/p&gt;

&lt;p&gt;Why Support Teams Love It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time diagnostics using KQL&lt;/li&gt;
&lt;li&gt;Custom alerts on resource health and anomalies&lt;/li&gt;
&lt;li&gt;Unified visibility across resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Azure Bastion – Secure VM Access Without the Risk&lt;/strong&gt;&lt;br&gt;
When remote access to virtual machines is required, security must come first. Azure Bastion allows secure RDP and SSH access directly from the Azure portal, without exposing public IPs.&lt;/p&gt;

&lt;p&gt;Real-World Example:&lt;br&gt;
A support engineer needs to inspect logs inside a production VM. Rather than opening ports to the public internet, they connect via Azure Bastion — directly through the portal, securely and instantly.&lt;/p&gt;

&lt;p&gt;Why It’s Critical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No public IPs required on VMs&lt;/li&gt;
&lt;li&gt;No open RDP/SSH ports = reduced attack surface&lt;/li&gt;
&lt;li&gt;Seamless browser-based access&lt;/li&gt;
&lt;/ul&gt;
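&lt;p&gt;Connecting can also be scripted. The sketch below uses the Azure CLI (resource names are placeholders, and the bastion commands may require an Azure CLI extension):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: open an SSH session to a VM through Azure Bastion (placeholder names)
az network bastion ssh \
  --name my-bastion \
  --resource-group my-rg \
  --target-resource-id /subscriptions/&amp;lt;sub-id&amp;gt;/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm \
  --auth-type AAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;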

&lt;p&gt;&lt;strong&gt;Azure Firewall – Network Control with Visibility&lt;/strong&gt;&lt;br&gt;
Azure Firewall helps support teams enforce network security boundaries and analyze traffic flows. It ensures that only authorized communication happens between services and the outside world.&lt;/p&gt;

&lt;p&gt;Real-World Example:&lt;br&gt;
A database suddenly becomes unreachable. The support team checks Firewall logs and finds that a recently added rule accidentally blocked outbound access from the app subnet. Quick fix. Issue resolved.&lt;/p&gt;

&lt;p&gt;Key Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized rule management (inbound/outbound)&lt;/li&gt;
&lt;li&gt;Threat Intelligence filtering (block known malicious IPs)&lt;/li&gt;
&lt;li&gt;Deep traffic inspection and logging&lt;/li&gt;
&lt;/ul&gt;
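&lt;p&gt;The deny scenario above can be investigated with a query along these lines (a sketch; it assumes firewall diagnostics are sent to Log Analytics via the legacy AzureDiagnostics table):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch: recent denied network-rule hits from Azure Firewall logs
AzureDiagnostics
| where Category == "AzureFirewallNetworkRule"
| where msg_s contains "Deny"
| project TimeGenerated, msg_s
| take 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;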

&lt;p&gt;&lt;strong&gt;Bringing It All Together&lt;/strong&gt;&lt;br&gt;
Here’s how these tools work together in a support scenario:&lt;/p&gt;

&lt;p&gt;[User Issue] ─&amp;gt; [Log Analytics: trace logs] ─&amp;gt; [Bastion: inspect VM] ─&amp;gt; [Firewall: verify access rules]&lt;/p&gt;

&lt;p&gt;These tools empower support teams to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Troubleshoot faster &lt;/li&gt;
&lt;li&gt;Access securely &lt;/li&gt;
&lt;li&gt;Respond confidently &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Support teams are the first line of defense when systems behave unexpectedly. Equipping them with tools like Log Analytics, Bastion, and Azure Firewall ensures faster resolution, better observability, and iron-clad security.&lt;/p&gt;

&lt;p&gt;🔧 These are not just tools — they’re your cloud safety net.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>programming</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>🌐 Why You Should Use Azure Front Door and WAF to Protect Your APIs</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Wed, 14 May 2025 19:07:16 +0000</pubDate>
      <link>https://forem.com/kiranrongali/why-you-should-use-azure-front-door-and-waf-to-protect-your-apis-54d</link>
      <guid>https://forem.com/kiranrongali/why-you-should-use-azure-front-door-and-waf-to-protect-your-apis-54d</guid>
      <description>&lt;p&gt;In today's cloud-native world, securing and optimizing API access is critical for performance, scalability, and protection against web-based threats. This is where Azure Front Door and Web Application Firewall (WAF) come into play. Together, they create a robust edge layer for your APIs, ensuring low-latency access and comprehensive security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Azure Front Door?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure Front Door is a global, scalable entry point that routes traffic to your backend applications and APIs. It combines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smart Traffic Routing&lt;/li&gt;
&lt;li&gt;Load Balancing&lt;/li&gt;
&lt;li&gt;Global Content Delivery&lt;/li&gt;
&lt;li&gt;SSL Termination&lt;/li&gt;
&lt;li&gt;Application Acceleration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's designed to deliver high availability and low latency using Microsoft's global edge network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Azure Web Application Firewall (WAF)?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Azure WAF is a firewall designed to protect HTTP(S) applications from common threats like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL injection&lt;/li&gt;
&lt;li&gt;Cross-site scripting (XSS)&lt;/li&gt;
&lt;li&gt;Request smuggling&lt;/li&gt;
&lt;li&gt;OWASP Top 10 attacks&lt;/li&gt;
&lt;li&gt;Bot traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When enabled on Azure Front Door, it inspects and filters incoming requests before they reach your APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Front Door + WAF: API Protection Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's a high-level architecture diagram showing how Azure Front Door and WAF sit in front of your backend APIs:&lt;/p&gt;

&lt;p&gt;[Client Devices]&lt;br&gt;
     ⬇&lt;br&gt;
[Azure Front Door]&lt;br&gt;
     └─ WAF (Security policies applied)&lt;br&gt;
           ⬇&lt;br&gt;
   [Backend APIs - App Service / APIM / AKS]&lt;/p&gt;

&lt;p&gt;This architecture ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security: Malicious traffic is filtered before it touches your app&lt;/li&gt;
&lt;li&gt;Performance: Users are routed to the nearest healthy endpoint&lt;/li&gt;
&lt;li&gt;Reliability: Front Door handles failover and retries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key Benefits for APIs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature → Benefit&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Global Load Balancing → Distribute traffic across geo-redundant backends&lt;/li&gt;
&lt;li&gt;WAF Protection → Block OWASP Top 10 vulnerabilities&lt;/li&gt;
&lt;li&gt;Fast Failover → Reroute requests during backend outages&lt;/li&gt;
&lt;li&gt;Edge SSL Termination → Secure and speed up client connections&lt;/li&gt;
&lt;li&gt;Rate Limiting → Throttle abusive traffic&lt;/li&gt;
&lt;li&gt;Request Inspection → Log and analyze malicious requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠️ How to Implement&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Front Door resource in Azure&lt;/li&gt;
&lt;li&gt;Add backend APIs (App Services, APIM, AKS)&lt;/li&gt;
&lt;li&gt;Configure routing rules (e.g., path-based routing)&lt;/li&gt;
&lt;li&gt;Attach WAF Policy with custom or default rules&lt;/li&gt;
&lt;li&gt;Test your endpoints through Front Door endpoint URL&lt;/li&gt;
&lt;li&gt;Monitor logs and metrics in Azure Monitor and Defender&lt;/li&gt;
&lt;/ol&gt;
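&lt;p&gt;As a rough sketch, the first two steps can be scripted with the Azure CLI (names below are placeholders, and exact commands differ between Front Door classic and Standard/Premium):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: create a Front Door Standard/Premium profile (placeholder names)
az afd profile create \
  --profile-name my-frontdoor \
  --resource-group my-rg \
  --sku Premium_AzureFrontDoor

# Sketch: create a WAF policy in Prevention mode to attach to the profile
az network front-door waf-policy create \
  --name myWafPolicy \
  --resource-group my-rg \
  --mode Prevention \
  --sku Premium_AzureFrontDoor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;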

&lt;p&gt;&lt;strong&gt;Real-World Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A fintech company exposes APIs globally for mobile users. With Azure Front Door:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic is routed to the closest backend based on latency&lt;/li&gt;
&lt;li&gt;WAF blocks injection attacks before they reach the app&lt;/li&gt;
&lt;li&gt;SSL termination is handled at the edge&lt;/li&gt;
&lt;li&gt;API Management adds versioning and throttling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tips and Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable logging to Application Insights&lt;/li&gt;
&lt;li&gt;Monitor WAF logs for repeated attack patterns&lt;/li&gt;
&lt;li&gt;Customize WAF rules (block, log, allow)&lt;/li&gt;
&lt;li&gt;Use versioned paths (/v1, /v2) and route based on path&lt;/li&gt;
&lt;li&gt;Integrate with APIM for rate limiting and developer portal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using Azure Front Door + WAF is one of the most powerful patterns for protecting modern APIs. You gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📈 Better performance globally&lt;/li&gt;
&lt;li&gt;🛡️ Strong security at the edge&lt;/li&gt;
&lt;li&gt;🚀 Scalable and reliable API delivery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When combined with Azure API Management, you have a complete enterprise-grade solution for API gateway, monitoring, security, and analytics.&lt;/p&gt;

&lt;p&gt;Start implementing Azure Front Door today, and build smarter, faster, and safer APIs for the cloud-native era.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>architecture</category>
      <category>software</category>
      <category>java</category>
    </item>
    <item>
      <title>🔁 How Azure Service Bus Handles Retries + Dead-Letter Queue (DLQ) Monitor Part - 2</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Wed, 07 May 2025 19:53:10 +0000</pubDate>
      <link>https://forem.com/kiranrongali/how-azure-service-bus-handles-retries-dead-letter-queue-dlq-monitor-part-2-5g4m</link>
      <guid>https://forem.com/kiranrongali/how-azure-service-bus-handles-retries-dead-letter-queue-dlq-monitor-part-2-5g4m</guid>
      <description>&lt;p&gt;As posted in &lt;a href="https://dev.to/kiranrongali/how-azure-service-bus-handles-retries-part-1-2gfg"&gt;How Azure Service Bus Handles Retries Part- 1 &lt;/a&gt;, the below post focuses on the retry flow and monitoring the Dead-Letter Queue (DLQ)&lt;/p&gt;

&lt;p&gt;Step-by-Step Upgrade&lt;br&gt;
1) Simulate a Failure in Message Processing&lt;br&gt;
Replace your original message processor with logic that randomly fails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());

processor.ProcessMessageAsync += async args =&amp;gt;
{
    string body = args.Message.Body.ToString();
    Console.WriteLine($"Received: {body}");

    // Simulate a failure randomly
    if (new Random().Next(1, 4) == 1)
    {
        throw new Exception("Simulated processing failure.");
    }

    await args.CompleteMessageAsync(args.Message);
    Console.WriteLine("Message completed successfully.");
};

processor.ProcessErrorAsync += async args =&amp;gt;
{
    Console.WriteLine($" Error Handler: {args.Exception.Message}");
    await Task.CompletedTask;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simulates real-world transient issues.&lt;/p&gt;

&lt;p&gt;2) Read from the Dead-Letter Queue (DLQ)&lt;br&gt;
This processor will read messages from the DLQ after they fail too many times:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Console.WriteLine("Starting Dead-Letter Queue Listener...");

var dlqReceiver = client.CreateReceiver(queueName, new ServiceBusReceiverOptions
{
    SubQueue = SubQueue.DeadLetter
});

var deadLetters = await dlqReceiver.ReceiveMessagesAsync(maxMessages: 10, TimeSpan.FromSeconds(5));

if (deadLetters.Count == 0)
{
    Console.WriteLine(" No messages in DLQ.");
}
else
{
    Console.WriteLine($" Found {deadLetters.Count} message(s) in DLQ:");
    foreach (var msg in deadLetters)
    {
        Console.WriteLine($"  ❗ Message: {msg.Body}");
        await dlqReceiver.CompleteMessageAsync(msg); // Mark as handled
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can loop this or run it on a timer in a real app.&lt;/p&gt;
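&lt;p&gt;For example, a minimal polling loop (a sketch using .NET 6's PeriodicTimer and the dlqReceiver created above) could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch: poll the DLQ every 30 seconds until the app shuts down
using var timer = new PeriodicTimer(TimeSpan.FromSeconds(30));

while (await timer.WaitForNextTickAsync())
{
    var messages = await dlqReceiver.ReceiveMessagesAsync(maxMessages: 10, TimeSpan.FromSeconds(5));
    foreach (var msg in messages)
    {
        Console.WriteLine($"DLQ message: {msg.Body}");
        await dlqReceiver.CompleteMessageAsync(msg); // Mark as handled
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;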

&lt;p&gt;Test It Out:&lt;/p&gt;

&lt;p&gt;Send messages with the original sender&lt;/p&gt;

&lt;p&gt;Let the random failure simulate retries&lt;/p&gt;

&lt;p&gt;Once the MaxDeliveryCount is hit (default: 10), the message will land in the DLQ&lt;/p&gt;

&lt;p&gt;DLQ monitor will read and log them&lt;/p&gt;

&lt;p&gt;Optional Improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push DLQ metrics to Application Insights&lt;/li&gt;
&lt;li&gt;Automatically re-send DLQ messages to retry queue (with delay)&lt;/li&gt;
&lt;li&gt;Add logging to a file or database&lt;/li&gt;
&lt;/ul&gt;
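&lt;p&gt;For the first improvement, pushing a DLQ metric could be sketched like this (it assumes the Application Insights SDK is configured and reuses deadLetters from the listener above; the metric name is chosen here for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.ApplicationInsights;

// Sketch: report how many messages were found in the DLQ
// (telemetryClient would come from dependency injection in a real app)
telemetryClient.TrackMetric("ServiceBusDeadLetteredMessages", deadLetters.Count);
telemetryClient.Flush();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;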

&lt;p&gt;Question:&lt;br&gt;
Can we use event-driven architecture when message retries fail?&lt;/p&gt;

&lt;p&gt;Answer:&lt;br&gt;
Yes — using event-driven architecture even when message retries fail is not only possible, but it's also one of the best patterns to handle failure gracefully and reactively.&lt;/p&gt;

&lt;p&gt;Here’s how it works and why it’s useful:&lt;/p&gt;

&lt;p&gt;Using Event-Driven Architecture After Retry Fails&lt;/p&gt;

&lt;p&gt;What it means:&lt;br&gt;
Instead of letting failed messages silently die or sit in the DLQ, you can emit an event when a failure happens (after all retries), and other systems can react to that event.&lt;/p&gt;

&lt;p&gt;Typical Flow with Retry Failure&lt;/p&gt;

&lt;p&gt;Producer --&amp;gt; Queue --&amp;gt; Processor --&amp;gt; Fails N times&lt;br&gt;
                                      ↓&lt;br&gt;
                                Moved to DLQ&lt;br&gt;
                                      ↓&lt;br&gt;
Dead-letter handler raises an event → Notifier / Logger / Dashboard / Alert&lt;/p&gt;

&lt;p&gt;Examples of Event-Driven Responses After Failure&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvez5ko6otyxr6t1uooi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvez5ko6otyxr6t1uooi.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How to Implement&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Emit Custom Event on DLQ Handler
After reading from DLQ:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// failureEventSender is a ServiceBusSender created for your failure-events queue or topic
foreach (var msg in deadLetters)
{
    var body = msg.Body.ToString();
    Console.WriteLine($" Dead-lettered: {body}");

    // Raise an event (example: publish to another queue or topic)
    var failureEvent = new ServiceBusMessage(JsonSerializer.Serialize(new
    {
        Type = "OrderProcessingFailed",
        OriginalPayload = body,
        Timestamp = DateTime.UtcNow
    }));

    await failureEventSender.SendMessageAsync(failureEvent);

    await dlqReceiver.CompleteMessageAsync(msg);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You could publish this to:&lt;/p&gt;

&lt;p&gt;Another Service Bus queue or topic&lt;/p&gt;

&lt;p&gt;Event Grid&lt;/p&gt;

&lt;p&gt;A webhook or REST API&lt;/p&gt;

&lt;p&gt;A logging pipeline (e.g., Application Insights, Datadog)&lt;/p&gt;

&lt;p&gt;Why This Matters&lt;br&gt;
Benefits of triggering events after retries fail:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time observability of critical failures&lt;/li&gt;
&lt;li&gt;Prevent silent data loss in DLQ&lt;/li&gt;
&lt;li&gt;Let multiple teams (notifications, support, analytics) respond independently&lt;/li&gt;
&lt;li&gt;Enables automatic recovery, fallbacks, or audits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best Practices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3ndf92afwcedlqjbtpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3ndf92afwcedlqjbtpv.png" alt="Image description" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>softwaredevelopment</category>
      <category>dotnet</category>
      <category>architecture</category>
    </item>
    <item>
      <title>🔁 How Azure Service Bus Handles Retries Part- 1</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Mon, 05 May 2025 20:23:56 +0000</pubDate>
      <link>https://forem.com/kiranrongali/how-azure-service-bus-handles-retries-part-1-2gfg</link>
      <guid>https://forem.com/kiranrongali/how-azure-service-bus-handles-retries-part-1-2gfg</guid>
      <description>&lt;p&gt;In the post below, we discussed how Azure Service Bus integrates with a .NET application and the benefits it provides. &lt;br&gt;
&lt;a href="https://dev.to/kiranrongali/why-use-azure-service-bus-how-to-integrate-in-net-13f9"&gt;https://dev.to/kiranrongali/why-use-azure-service-bus-how-to-integrate-in-net-13f9&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But what happens if a failure occurs during message processing? How does retry behavior work, and how many times will it attempt to process the message again? This post will help you understand that process.&lt;/p&gt;

&lt;p&gt;When your message processor (receiver) throws an exception or fails to call CompleteMessageAsync, Azure Service Bus automatically retries processing that message, based on its configured max delivery count.&lt;/p&gt;

&lt;p&gt;Retry Behavior (Default)&lt;br&gt;
If your handler fails or throws an exception, the message is re-delivered.&lt;/p&gt;

&lt;p&gt;This retry continues until:&lt;br&gt;
The message is successfully completed (CompleteMessageAsync)&lt;br&gt;
The max delivery count is exceeded&lt;/p&gt;

&lt;p&gt;What Happens After Max Retries?&lt;br&gt;
Once the message exceeds the MaxDeliveryCount (default: 10), it is moved to the Dead-Letter Queue (DLQ).&lt;/p&gt;

&lt;p&gt;Dead-Letter Queue&lt;br&gt;
This is a special sub-queue where failed messages are stored for inspection, manual or automated reprocessing.&lt;/p&gt;

&lt;p&gt;You can access the DLQ like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var receiver = client.CreateReceiver("your-queue-name", new ServiceBusReceiverOptions
{
    SubQueue = SubQueue.DeadLetter
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What You Need to Do&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Proper Error Handling
Wrap your message processing in try/catch:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;processor.ProcessMessageAsync += async args =&amp;gt;
{
    try
    {
        var body = args.Message.Body.ToString();
        Console.WriteLine($"Processing: {body}");

        // Your logic here

        await args.CompleteMessageAsync(args.Message);
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error: {ex.Message}");
        // Do NOT complete the message → it will retry
        // Optionally log or notify
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
Configure MaxDeliveryCount
In Azure Portal → Service Bus Queue → Properties, you can set 
MaxDeliveryCount.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or via ARM/CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"maxDeliveryCount": 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
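&lt;p&gt;The same setting can also be applied with the Azure CLI (a sketch; resource names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: lower the max delivery count on an existing queue
az servicebus queue update \
  --resource-group my-rg \
  --namespace-name my-namespace \
  --name your-queue-name \
  --max-delivery-count 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;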



&lt;ul&gt;
&lt;li&gt;
Monitor the Dead-Letter Queue&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use: Azure Portal (Explore Dead-letter queue)&lt;/p&gt;

&lt;p&gt;Azure Monitor / Alerts&lt;/p&gt;

&lt;p&gt;Code-based reprocessing via SubQueue.DeadLetter&lt;/p&gt;

&lt;p&gt;Optional: Auto Retry with Delay&lt;br&gt;
Azure Service Bus doesn’t support native delayed retries (e.g., exponential backoff), but you can:&lt;/p&gt;

&lt;p&gt;Use Scheduled Enqueue Time to requeue a message after a delay&lt;/p&gt;

&lt;p&gt;Move failed messages to a custom retry queue with delay&lt;/p&gt;

&lt;p&gt;Example of scheduling a retry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var retryMessage = new ServiceBusMessage("retry this")
{
    ScheduledEnqueueTime = DateTimeOffset.UtcNow.AddMinutes(5)
};
await sender.SendMessageAsync(retryMessage);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Best Practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wrap processing in try/catch to prevent app crashes and enable retry logic&lt;/li&gt;
&lt;li&gt;Monitor the DLQ to avoid silent message loss&lt;/li&gt;
&lt;li&gt;Use custom retry queues for control over backoff and limits&lt;/li&gt;
&lt;li&gt;Handle poison messages: log, alert, or notify for manual review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it for Part 1. Part 2 will be covered in a separate post, focusing on the retry flow and monitoring the Dead-Letter Queue (DLQ).&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>softwaredevelopment</category>
      <category>java</category>
      <category>architecture</category>
    </item>
    <item>
      <title>💡Why Use Azure Service Bus? How to Integrate in .NET</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Wed, 30 Apr 2025 12:24:00 +0000</pubDate>
      <link>https://forem.com/kiranrongali/why-use-azure-service-bus-how-to-integrate-in-net-13f9</link>
      <guid>https://forem.com/kiranrongali/why-use-azure-service-bus-how-to-integrate-in-net-13f9</guid>
      <description>&lt;p&gt;Azure Service Bus enables reliable, asynchronous communication between microservices or distributed components in your app. It helps in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupling services (e.g., API triggers a background worker)&lt;/li&gt;
&lt;li&gt;Buffering traffic (e.g., spikes in request volume)&lt;/li&gt;
&lt;li&gt;Enabling retries and fault tolerance&lt;/li&gt;
&lt;li&gt;Scalable message handling with minimal coupling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step: Azure Service Bus Integration in .NET&lt;/strong&gt;&lt;br&gt;
Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure subscription&lt;/li&gt;
&lt;li&gt;An Azure Service Bus namespace and a queue or topic&lt;/li&gt;
&lt;li&gt;A .NET 6 or later project (console, web API, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 1: Install the NuGet Package&lt;br&gt;
In your terminal or Package Manager Console, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet add package Azure.Messaging.ServiceBus

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or via Visual Studio's NuGet UI:&lt;br&gt;
Manage NuGet Packages → Search for Azure.Messaging.ServiceBus → Install&lt;/p&gt;

&lt;p&gt;Step 2: Configure the Connection String&lt;br&gt;
Go to Azure Portal → Your Service Bus namespace → Shared Access Policies → RootManageSharedAccessKey → Copy the Connection String.&lt;/p&gt;
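&lt;p&gt;If you prefer the CLI, the same connection string can be fetched like this (a sketch; resource names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: read the namespace-level connection string
az servicebus namespace authorization-rule keys list \
  --resource-group my-rg \
  --namespace-name my-namespace \
  --name RootManageSharedAccessKey \
  --query primaryConnectionString -o tsv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;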

&lt;p&gt;Add it to appsettings.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ServiceBus": {
    "ConnectionString": "&amp;lt;your-connection-string&amp;gt;",
    "QueueName": "your-queue-name"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Send a Message&lt;br&gt;
Sending a message using a ServiceBusSender&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("&amp;lt;your-connection-string&amp;gt;");
var sender = client.CreateSender("your-queue-name");

var message = new ServiceBusMessage("Hello from .NET");
await sender.SendMessageAsync(message);

Console.WriteLine("Message sent!");

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use dependency injection and IConfiguration to get config from appsettings.json.&lt;/p&gt;

&lt;p&gt;Step 4: Receive Messages&lt;br&gt;
Using ServiceBusProcessor (recommended for background processing)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var processor = client.CreateProcessor("your-queue-name", new ServiceBusProcessorOptions());

processor.ProcessMessageAsync += async args =&amp;gt;
{
    string body = args.Message.Body.ToString();
    Console.WriteLine($"Received: {body}");

    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =&amp;gt;
{
    Console.WriteLine($"Error: {args.Exception.Message}");
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();

// Optional: Stop when done
// await processor.StopProcessingAsync();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: Use Dependency Injection (in ASP.NET Core)&lt;br&gt;
Add this to your Startup.cs or Program.cs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Services.AddSingleton(serviceProvider =&amp;gt;
{
    var config = serviceProvider.GetRequiredService&amp;lt;IConfiguration&amp;gt;();
    return new ServiceBusClient(config["ServiceBus:ConnectionString"]);
});

builder.Services.AddSingleton(serviceProvider =&amp;gt;
{
    var config = serviceProvider.GetRequiredService&amp;lt;IConfiguration&amp;gt;();
    var client = serviceProvider.GetRequiredService&amp;lt;ServiceBusClient&amp;gt;();
    return client.CreateSender(config["ServiceBus:QueueName"]);
});

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 6: Bonus – Send JSON or Custom Object&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var payload = new { OrderId = 123, Status = "Processed" };
var json = JsonSerializer.Serialize(payload);

var message = new ServiceBusMessage(json)
{
    ContentType = "application/json"
};
await sender.SendMessageAsync(message);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample Use Case:&lt;/p&gt;

&lt;p&gt;Let’s say you're building a food delivery app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Order API sends an order placed event to the queue.&lt;/li&gt;
&lt;li&gt;A Delivery Service listens to the queue and schedules a delivery.&lt;/li&gt;
&lt;li&gt;A Notification Service sends an SMS or email, also triggered from the queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Summary:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ezwccans1v2etgchtjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ezwccans1v2etgchtjo.png" alt="Image description" width="405" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>softwaredevelopment</category>
      <category>java</category>
      <category>architecture</category>
    </item>
    <item>
      <title>💬📦 Why Every Modern App Needs a Message Queue</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Fri, 25 Apr 2025 02:00:40 +0000</pubDate>
      <link>https://forem.com/kiranrongali/why-every-modern-app-needs-a-message-queue-4dk7</link>
      <guid>https://forem.com/kiranrongali/why-every-modern-app-needs-a-message-queue-4dk7</guid>
      <description>&lt;p&gt;In today’s software landscape, applications are expected to be highly available, scalable, and resilient — all while providing a fast, seamless user experience. But as complexity increases with the adoption of microservices, cloud-native architecture, and real-time data processing, direct communication between services becomes fragile and hard to scale.&lt;/p&gt;

&lt;p&gt;That’s where message queues come in.&lt;/p&gt;

&lt;p&gt;What is a Message Queue?&lt;br&gt;
A message queue is a form of asynchronous service-to-service communication. Instead of a service calling another directly and waiting for a response, it places a message into a queue. The message is then picked up and processed by a separate service — either immediately or whenever it’s ready.&lt;/p&gt;

&lt;p&gt;Think of it like:&lt;br&gt;
A to-do list: someone adds tasks to it (producer), and someone else comes and does them (consumer) — without both needing to be there at the same time.&lt;/p&gt;
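&lt;p&gt;The analogy can be sketched in a few lines of C#. Here System.Threading.Channels stands in for a real broker (such as Azure Service Bus) purely to show the producer/consumer decoupling:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Threading.Channels;

// Sketch: an in-process "queue" with an independent producer and consumer
var queue = Channel.CreateUnbounded&amp;lt;string&amp;gt;();

// Producer: adds tasks and moves on without waiting for the consumer
await queue.Writer.WriteAsync("process order #123");
await queue.Writer.WriteAsync("send confirmation email");
queue.Writer.Complete();

// Consumer: picks tasks up whenever it is ready
await foreach (var task in queue.Reader.ReadAllAsync())
{
    Console.WriteLine($"Handling: {task}");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;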

&lt;p&gt;Why Use a Message Queue?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Decoupling Services
Without message queues, services are tightly connected. If one fails, the whole process can collapse. With queues, the sender and receiver are independent — one can function even if the other is temporarily down.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
An Order API accepts customer purchases. It places a message in the queue for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inventory Service to update stock&lt;/li&gt;
&lt;li&gt;Payment Service to charge the customer&lt;/li&gt;
&lt;li&gt;Email Service to send confirmation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each service works independently, and failures in one don't block the others.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Asynchronous Processing
Some tasks take time (sending emails, generating reports). Message queues let you process those in the background, so the user isn’t kept waiting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
A user uploads a large image to be analyzed.&lt;/p&gt;

&lt;p&gt;Your app queues the image for processing and returns a quick “We got it!” message.&lt;/p&gt;

&lt;p&gt;A background worker picks up the task, processes it, and stores the result.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Scalability
Queues naturally enable horizontal scaling — just add more consumers to process messages faster during peak load.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
During Black Friday, your e-commerce site receives 10,000 orders per minute.&lt;/p&gt;

&lt;p&gt;A queue holds all those orders.&lt;/p&gt;

&lt;p&gt;You spin up 50 instances of the Order Processor service to handle them in parallel.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Reliability and Fault Tolerance
Message queues offer durability and retry mechanisms. If a service fails or crashes mid-task, the queue holds the message and retries later or routes it to a dead-letter queue for review.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
Your Payment Processor crashes while processing an order.&lt;/p&gt;

&lt;p&gt;The message is re-queued and retried once the service is back up — no data loss.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Traffic Buffering
Queues smooth out sudden traffic spikes. Instead of overwhelming your system, requests are stored and processed gradually.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
Your app launches a viral marketing campaign.&lt;/p&gt;

&lt;p&gt;Instead of crashing from overload, your queue handles the surge and processes it smoothly over time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8tj5lkl2fojsf6mbq1m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8tj5lkl2fojsf6mbq1m.png" alt="Image description" width="665" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdthr5too4ghvt1f6t8i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdthr5too4ghvt1f6t8i0.png" alt="Image description" width="390" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Real-World Scenario: With vs Without Message Queue&lt;br&gt;
Without Queue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User submits an order → API calls Billing → calls Inventory → sends Email
If any call fails, the whole process breaks
User may have to retry the entire flow manually
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Queue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User submits an order → API places messages in queues
→ Billing, Inventory, and Email services process tasks independently
Failures handled via retries or logged in dead-letter queues
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a modern, cloud-native architecture, message queues aren’t optional — they’re essential. Whether you're building an e-commerce platform, a real-time analytics pipeline, or a scalable microservices backend, message queues offer the reliability, performance, and flexibility you need.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decoupled&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Resilient&lt;/li&gt;
&lt;li&gt;Efficient&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, message queues give your architecture breathing room. Without them, your services become too tightly coupled and fragile — one broken link, and the entire chain suffers.&lt;/p&gt;

&lt;p&gt;If you're building or scaling any serious application, a message queue isn’t just a “nice-to-have”; it's a must.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>dotnet</category>
      <category>java</category>
      <category>architecture</category>
    </item>
    <item>
      <title>⚙️Integrate Datadog with a .NET Application</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Mon, 21 Apr 2025 17:28:28 +0000</pubDate>
      <link>https://forem.com/kiranrongali/integrate-datadog-with-a-net-application-1gac</link>
      <guid>https://forem.com/kiranrongali/integrate-datadog-with-a-net-application-1gac</guid>
      <description>&lt;p&gt;In a previous post, &lt;a href="https://dev.to/kiranrongali/how-datadog-helps-developers-teams-kkg"&gt;How Datadog Helps Developers &amp;amp; Teams&lt;/a&gt;, we discussed how Datadog can benefit developers and teams. In this post, we'll focus on integrating it with a .NET application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1:&lt;/strong&gt; Create a Datadog Account&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.  Go to https://www.datadoghq.com.
2.  Sign up for a free trial.
3.  Once logged in, choose your region (important for API keys and endpoints).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Install the Datadog Agent (on your dev machine, server, or container)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Download the agent from Datadog Agent Downloads.&lt;/li&gt;
&lt;li&gt;Follow the OS-specific instructions:

&lt;ul&gt;
&lt;li&gt;Windows: Run the .msi installer and enter your API Key during setup. &lt;/li&gt;
&lt;li&gt;Linux/macOS: Use the shell script provided.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🔑 Your API key can be found in Datadog → Integrations → APIs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; Install Datadog Tracer for .NET
In your .NET Core or ASP.NET Core project:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Install the NuGet package:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; #bash
 dotnet add package Datadog.Trace
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable automatic instrumentation:&lt;/p&gt;

&lt;p&gt;Add this to your environment (or launch profile):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #bash

  CORECLR_ENABLE_PROFILING=1   # enable the .NET profiler (automatic instrumentation)
  CORECLR_PROFILER={846F5F1C-F9AE-4B07-969E-05C26BC060D8}
  DD_TRACE_ENABLED=true
  DD_ENV=dev
  DD_SERVICE=my-dotnet-app
  DD_VERSION=1.0.0
  DD_AGENT_HOST=localhost   # or your agent IP
  DD_TRACE_DEBUG=true       # optional for debugging
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Or set them in a JSON config file (e.g. datadog.json, pointed to by DD_TRACE_CONFIG_FILE):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  #json
  {
    "DD_SERVICE": "my-dotnet-app",
    "DD_ENV": "dev",
    "DD_AGENT_HOST": "localhost",
    "DD_TRACE_ENABLED": "true"
  }
&lt;/code&gt;&lt;/pre&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; Enable APM (Tracing)
With automatic instrumentation, IIS and Kestrel web requests are traced with no code changes. To add custom spans around your own logic, background jobs, or SQL calls:
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//csharp
using Datadog.Trace;

public IActionResult Index()
{
    // StartActive returns an IScope; disposing it closes the span.
    using (var scope = Tracer.Instance.StartActive("custom.operation"))
    {
        // your logic here
    }

    return View();
}
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Enable Logging (Optional but Recommended)&lt;br&gt;
Use Serilog, NLog, or Microsoft.Extensions.Logging to forward logs:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For Serilog:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#bash
dotnet add package Serilog.Sinks.Datadog.Logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Example Config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//csharp
Log.Logger = new LoggerConfiguration()
    .WriteTo.DatadogLogs("&amp;lt;your-api-key&amp;gt;", source: "csharp", service: "my-dotnet-app")
    .CreateLogger();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Verify in Datadog Dashboard&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to APM → Services in Datadog.&lt;/li&gt;
&lt;li&gt;Select your service (my-dotnet-app) to view:

&lt;ul&gt;
&lt;li&gt;Response times&lt;/li&gt;
&lt;li&gt;Error rates&lt;/li&gt;
&lt;li&gt;Traces and spans&lt;/li&gt;
&lt;li&gt;Host metrics (CPU, memory, etc.)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Set Up Dashboards &amp;amp; Alerts&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a custom dashboard:

&lt;ul&gt;
&lt;li&gt;Add widgets: charts, traces, logs, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Add monitors/alerts: 

&lt;ul&gt;
&lt;li&gt;Example: “Notify if error rate &amp;gt; 5% for 5 min”&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Integrate with Slack, Teams, PagerDuty, etc. for alert delivery.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt;: Monitor SQL, Redis, HTTP, etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install additional NuGet packages if needed (e.g., Datadog.Trace.ClrProfiler.Managed).&lt;/li&gt;
&lt;li&gt;You can trace: 

&lt;ul&gt;
&lt;li&gt;SQL Server (SqlClient)&lt;/li&gt;
&lt;li&gt;HTTP calls (HttpClient)&lt;/li&gt;
&lt;li&gt;Entity Framework&lt;/li&gt;
&lt;li&gt;Background Services (e.g., Hangfire)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;That's it!&lt;br&gt;
With this setup, you'll now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time visibility into your .NET app’s performance&lt;/li&gt;
&lt;li&gt;Detailed traces and logs&lt;/li&gt;
&lt;li&gt;Dashboards and alerts&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dotnet</category>
      <category>softwaredevelopment</category>
      <category>dba</category>
      <category>devops</category>
    </item>
    <item>
      <title>💡 How Datadog Helps Developers &amp; Teams</title>
      <dc:creator>Kiran Rongali</dc:creator>
      <pubDate>Thu, 17 Apr 2025 03:10:57 +0000</pubDate>
      <link>https://forem.com/kiranrongali/how-datadog-helps-developers-teams-kkg</link>
      <guid>https://forem.com/kiranrongali/how-datadog-helps-developers-teams-kkg</guid>
      <description>&lt;p&gt;Datadog is more than just a monitoring tool — it’s a full-stack observability platform that empowers developers, DevOps engineers, SREs, and even business teams to monitor, debug, optimize, and secure applications and infrastructure.&lt;/p&gt;

&lt;p&gt;Here’s a breakdown of how Datadog supports different stages and areas of the development and operations lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Application Performance Monitoring (APM)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Identify Slow Code &amp;amp; Optimize Bottlenecks&lt;br&gt;
Tracks end-to-end performance of .NET Core, Java, Node.js, Python, and more. &lt;br&gt;
Visualizes method-level execution time, external dependencies (DBs, APIs), and latency.&lt;br&gt;
Provides service maps, flame graphs, and spans to isolate slow or failing areas.&lt;br&gt;
Example: A POST /api/order/checkout request takes 5 seconds. APM shows 3.5 seconds are spent on a SQL join, helping the dev fix the DB query.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; 
Logs Management &amp;amp; Analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Centralize, Search &amp;amp; Correlate Logs Across Services&lt;br&gt;
Ingest logs from apps, servers, containers, firewalls, and more.&lt;br&gt;
Supports parsing, filtering, tagging, and alerting on logs.&lt;br&gt;
You can correlate logs with traces, helping pinpoint context fast.&lt;/p&gt;

&lt;p&gt;Example: A user reports a payment failure. Devs search logs for their email, see the stack trace, and click into the corresponding trace to understand what broke.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Real-Time Metrics Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Monitor System Health &amp;amp; Custom Business Metrics&lt;br&gt;
Tracks infrastructure (CPU, memory, I/O) and app-specific metrics.&lt;br&gt;
Supports custom metrics via SDKs for .NET, Python, Java, etc.&lt;br&gt;
Dashboards visualize real-time &amp;amp; historical data for fast decision-making.&lt;/p&gt;

&lt;p&gt;Example: Developers track ItemsInCart.Count across time and regions to understand shopping trends or detect sudden drops.&lt;/p&gt;
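
&lt;p&gt;A custom metric like ItemsInCart.Count can be sent from C# with the DogStatsD client (the DogStatsD-CSharp-Client NuGet package); the metric name and tag below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//csharp
using StatsdClient;

var config = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // the Datadog Agent host
    StatsdPort = 8125
};
DogStatsd.Configure(config);

// Report the current cart size as a gauge, tagged by region.
DogStatsd.Gauge("items_in_cart.count", 42, tags: new[] { "region:us-east" });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;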

&lt;ul&gt;
&lt;li&gt;
Distributed Tracing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Debug Across Microservices &amp;amp; Understand Dependencies&lt;br&gt;
See how a request flows through multiple services (microservices or serverless).&lt;br&gt;
Understand timing and failures at each step.&lt;br&gt;
Detect high-latency internal calls or timeout-prone external APIs.&lt;/p&gt;

&lt;p&gt;Example: A request flows through Auth → User → Billing → Notification services. You spot a 2s delay in Billing caused by an overloaded Redis instance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Infrastructure Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Visualize &amp;amp; Alert on Infrastructure in Cloud, Hybrid, or On-Prem&lt;br&gt;
Automatically discovers EC2, Azure VMs, GCP instances, Kubernetes nodes, Docker containers, and more.&lt;br&gt;
Provides live heatmaps, topology maps, and health overviews.&lt;br&gt;
Tracks disk usage, open connections, network traffic, etc.&lt;/p&gt;

&lt;p&gt;Example: A .NET app running in Azure AKS crashes intermittently. Datadog shows the node has disk pressure and evictions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Error Tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Identify, Aggregate &amp;amp; Prioritize Production Errors&lt;br&gt;
Tracks exceptions in your app: .NET, JavaScript, etc.&lt;br&gt;
Groups them by stack trace and frequency.&lt;br&gt;
Allows you to set alerts on new or high-volume errors.&lt;/p&gt;

&lt;p&gt;Example: After a deployment, 300+ NullReferenceException logs spike. Datadog notifies the dev team instantly, and they roll back before users notice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Synthetic Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Ensure Uptime &amp;amp; Measure End-User Experience&lt;br&gt;
Automates browser or HTTP tests against APIs and websites.&lt;br&gt;
Can simulate logins, form fills, and flows (e.g., checkout process).&lt;br&gt;
Checks from multiple global locations for regional issues.&lt;/p&gt;

&lt;p&gt;Example: Datadog detects a failed login flow at 2 AM via synthetics and alerts DevOps before users wake up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Real User Monitoring (RUM)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Understand How Users Experience Your Frontend&lt;br&gt;
Tracks page load time, JS errors, AJAX calls, and user sessions.&lt;br&gt;
Detects slow assets or unhandled exceptions affecting UX.&lt;br&gt;
Filters by browser, location, OS, user ID.&lt;/p&gt;

&lt;p&gt;Example: Users on Safari 17.3 report lag. Datadog shows high Largest Contentful Paint (LCP) times for that browser only.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Security Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Detect Security Threats &amp;amp; Misconfigurations&lt;br&gt;
Analyzes logs for suspicious activity (e.g., SQL injection attempts, brute-force login).&lt;br&gt;
Monitors compliance (PCI, HIPAA, SOC 2).&lt;br&gt;
Integrates with SIEM tools or your DevSecOps pipeline.&lt;/p&gt;

&lt;p&gt;Example: Detects an unusual volume of login attempts from a single IP. Automatically blocks it via an integrated firewall rule.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
CI/CD Pipeline Monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Improve Build/Deploy Reliability&lt;br&gt;
Monitors builds, deploy times, failures, test coverage, and duration.&lt;br&gt;
Tracks deployments across environments and correlates them with performance regressions.&lt;/p&gt;

&lt;p&gt;Example: A deployment to prod causes latency spikes. Datadog tags traces and logs with that version, making the rollback decision faster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Alerts &amp;amp; Notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Proactive Monitoring &amp;amp; Automated Response&lt;br&gt;
Create monitors for any metric, log pattern, or error.&lt;br&gt;
Supports anomaly detection and forecasting.&lt;br&gt;
Alerting via Slack, Teams, PagerDuty, Webhooks, Email, etc.&lt;/p&gt;

&lt;p&gt;Example: “Alert if HTTP 5xx rate &amp;gt; 1% for 5 mins” — Datadog auto-notifies the incident team.&lt;/p&gt;
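
&lt;p&gt;A monitor like that can be expressed as a Datadog metric query; the exact trace metric names depend on your framework and tracer version, so the ones below (for ASP.NET Core APM) are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sum(last_5m):
  sum:trace.aspnet_core.request.errors{service:my-dotnet-app}.as_count()
  / sum:trace.aspnet_core.request.hits{service:my-dotnet-app}.as_count()
  &amp;gt; 0.01
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;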

&lt;ul&gt;
&lt;li&gt;
Custom Dashboards for Teams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Case: Give Everyone the Insights They Need&lt;br&gt;
Devs: Debug traces, view live logs.&lt;br&gt;
Ops: View CPU, memory, and system alerts.&lt;br&gt;
Business: Track sales, active users, or feature usage.&lt;br&gt;
Security: Monitor unusual login attempts or audit trails.&lt;/p&gt;

&lt;p&gt;Example: Product Managers view a dashboard tracking premium feature usage and correlate it with recent deploys.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Final Thoughts&lt;/u&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8rd422js6ypi76wmux3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8rd422js6ypi76wmux3.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read this. I hope you found these use cases helpful. Stay tuned for the next blog on integrating Datadog with a .NET application.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>dotnet</category>
      <category>dba</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
