<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ahsan Nabi Dar</title>
    <description>The latest articles on Forem by Ahsan Nabi Dar (@darnahsan).</description>
    <link>https://forem.com/darnahsan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F511498%2F812c591b-33c1-4251-b5a6-5d5dd25d3fb2.jpeg</url>
      <title>Forem: Ahsan Nabi Dar</title>
      <link>https://forem.com/darnahsan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/darnahsan"/>
    <language>en</language>
    <item>
      <title>Building a Scalable Audit Logging Pipeline in Elixir: Handling Millions of Events Without Breaking Your Database</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Mon, 29 Dec 2025 11:43:55 +0000</pubDate>
      <link>https://forem.com/darnahsan/building-a-scalable-audit-logging-pipeline-in-elixir-handling-millions-of-events-without-breaking-2l31</link>
      <guid>https://forem.com/darnahsan/building-a-scalable-audit-logging-pipeline-in-elixir-handling-millions-of-events-without-breaking-2l31</guid>
      <description>&lt;p&gt;In enterprise applications dealing with payroll, compensation, and benefits data, every change matters. When a single data point can cascade through multiple systems, affecting someone's salary or benefits package, having a complete audit trail isn't just nice to have—it's mission-critical. But capturing these events is only half the battle. The real challenge? Managing millions of audit records without turning your database into a bottleneck.&lt;br&gt;
Let me walk you through how we built a scalable audit logging pipeline for our Elixir-based application, handling over 5 million events monthly per client while keeping our core services fast and our data accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;Our core application is built entirely in Elixir, operating in both public and corporate domains. The complexity of our data flows is significant: each data point typically touches multiple workers and goes through an average of 6-7 transactions before settling. While Elixir's concurrency model handles millions of operations with ease, audit logging presents unique challenges that can't simply be solved by throwing more processes at the problem.&lt;br&gt;
The issue compounds quickly. With over 5 million audit events per month for a single client, we're looking at 60+ million records annually. Left unchecked, this growth pattern threatens to turn your primary database into a bottleneck, slowing down queries, bloating backups, and eventually impacting the performance of your core business operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Solution Architecture
&lt;/h2&gt;

&lt;p&gt;We designed a pipeline that separates audit capture from audit storage, leveraging the outbox pattern to ensure reliable event delivery while keeping our operational database lean.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqheocsmk478k9xpnpkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqheocsmk478k9xpnpkp.png" alt="Architecture Flow" width="800" height="2131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Components
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;1. Carbonite: Capturing Changes at the Database Level&lt;/em&gt;&lt;br&gt;
Carbonite is an Elixir library that uses database triggers to automatically capture all changes to your tables. This approach offers several advantages:&lt;/p&gt;

&lt;p&gt;Zero application code changes: You don't need to manually log changes throughout your codebase&lt;br&gt;
Complete coverage: Every INSERT, UPDATE, and DELETE is captured automatically&lt;br&gt;
Transactional consistency: Audit records are created in the same transaction as the data changes&lt;br&gt;
Rich metadata: Captures before/after values, timestamps, and transaction context&lt;/p&gt;

&lt;p&gt;The beauty of Carbonite is that it operates at the database level. Even if you have complex business logic spread across multiple modules and processes, you get comprehensive audit coverage without littering your code with logging statements.&lt;/p&gt;
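
&lt;p&gt;As a rough sketch, enabling Carbonite usually comes down to a single Ecto migration that installs its schema and attaches a trigger to each audited table. The module and table names below are illustrative; check Carbonite's documentation for the exact migration helpers in your version.&lt;/p&gt;

```elixir
# Illustrative migration: install Carbonite and attach its
# change-capture trigger to an audited table. Names are hypothetical.
defmodule MyApp.Repo.Migrations.InstallCarbonite do
  use Ecto.Migration

  def up do
    # Creates the carbonite schema with its transaction and change tables.
    Carbonite.Migrations.up(1)
    # Every INSERT, UPDATE, and DELETE on this table is now recorded.
    Carbonite.Migrations.create_trigger(:compensation_records)
  end

  def down do
    Carbonite.Migrations.drop_trigger(:compensation_records)
    Carbonite.Migrations.down(1)
  end
end
```

&lt;p&gt;At runtime, each business transaction then records its audit metadata alongside the data change itself, which is what gives you the transactional consistency described above.&lt;/p&gt;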

&lt;p&gt;&lt;em&gt;2. The Outbox Pattern: Reliable Event Publishing&lt;/em&gt;&lt;br&gt;
The outbox pattern solves a critical problem: how do you reliably publish events from a transactional system? By writing audit events to an outbox table within the same database transaction as your business data, you ensure that either both succeed or both fail—no orphaned audit records or missed events.&lt;br&gt;
Carbonite's outbox utility reads from these outbox tables and publishes events to external systems. This decouples the write path (fast database operations) from the publish path (potentially slower network operations).&lt;/p&gt;
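
&lt;p&gt;The relay half of the pattern can be pictured as a small process that periodically drains the outbox in batches and only marks rows processed after a successful publish. Everything in this sketch (the module and helper function names) is hypothetical; Carbonite ships its own outbox utilities that play this role.&lt;/p&gt;

```elixir
# Hypothetical outbox relay: drain unprocessed audit events in batches,
# publish them, and only then advance the outbox cursor. A crash before
# mark_processed/1 means the batch is re-published on the next poll,
# so downstream consumers must be idempotent.
defmodule MyApp.AuditRelay do
  use GenServer

  @poll_interval :timer.seconds(5)
  @batch_size 500

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(state) do
    schedule_poll()
    {:ok, state}
  end

  def handle_info(:poll, state) do
    batch = MyApp.Audit.fetch_outbox_batch(@batch_size)
    Enum.each(batch, fn event -> MyApp.Publisher.publish(event) end)
    MyApp.Audit.mark_processed(batch)
    schedule_poll()
    {:noreply, state}
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, @poll_interval)
end
```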

&lt;p&gt;&lt;em&gt;3. LavinMQ: Lightweight Message Broker&lt;/em&gt;&lt;br&gt;
We chose LavinMQ as our message broker for several reasons:&lt;/p&gt;

&lt;p&gt;AMQP protocol support: Industry-standard messaging protocol&lt;br&gt;
Lightweight and fast: Designed for high throughput with minimal resource overhead&lt;br&gt;
Reliable delivery: Ensures messages aren't lost between components&lt;br&gt;
Buffering capability: Handles traffic spikes gracefully&lt;/p&gt;

&lt;p&gt;LavinMQ acts as the shock absorber in our pipeline, allowing the audit capture rate to differ from the processing rate without losing data.&lt;/p&gt;
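
&lt;p&gt;Publishing into LavinMQ from Elixir can look like the following sketch, using the community amqp package; the URL, exchange, and routing key are placeholders.&lt;/p&gt;

```elixir
# Publish an audit event to LavinMQ over AMQP (amqp Hex package).
{:ok, conn} = AMQP.Connection.open("amqps://user:pass@lavinmq.example.com/vhost")
{:ok, chan} = AMQP.Channel.open(conn)

:ok = AMQP.Exchange.declare(chan, "audit", :topic, durable: true)

payload = Jason.encode!(%{table: "salaries", op: "UPDATE", id: 42})

# persistent: true asks the broker to write the message to disk, so the
# "shock absorber" keeps its contents across a broker restart.
:ok = AMQP.Basic.publish(chan, "audit", "audit.salaries", payload, persistent: true)
```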

&lt;p&gt;&lt;em&gt;4. Vector.dev: Event Processing and Routing&lt;/em&gt;&lt;br&gt;
Vector is a high-performance observability data pipeline that consumes messages from LavinMQ and handles:&lt;/p&gt;

&lt;p&gt;Transformation: Reshaping events into the desired format&lt;br&gt;
Enrichment: Adding metadata or contextual information&lt;br&gt;
Routing: Directing events to appropriate destinations&lt;br&gt;
Batching: Optimizing write patterns to storage&lt;/p&gt;

&lt;p&gt;Vector's configuration-as-code approach makes it easy to modify the pipeline without redeploying applications.&lt;/p&gt;
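
&lt;p&gt;A minimal Vector configuration for this pipeline might look like the sketch below: an AMQP source reading from LavinMQ and an S3-compatible sink pointed at Backblaze B2. Option names vary between Vector versions, and the connection string, bucket, and endpoint here are placeholders.&lt;/p&gt;

```toml
# Illustrative vector.toml: LavinMQ in, Backblaze B2 out.
[sources.audit_events]
type = "amqp"
connection_string = "amqps://user:pass@lavinmq.example.com/vhost"
queue = "audit"

[sinks.audit_archive]
type = "aws_s3"
inputs = ["audit_events"]
bucket = "audit-archive"
endpoint = "https://s3.us-west-004.backblazeb2.com"
compression = "gzip"
encoding.codec = "json"
```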

&lt;p&gt;&lt;em&gt;5. Backblaze: Cost-Effective Long-Term Storage&lt;/em&gt;&lt;br&gt;
For long-term storage, we use Backblaze B2 via the S3-compatible API. Backblaze offers:&lt;/p&gt;

&lt;p&gt;Low cost: Significantly cheaper than traditional cloud object storage&lt;br&gt;
S3 compatibility: Works with existing S3 tooling and libraries&lt;br&gt;
Durability: Enterprise-grade data protection&lt;br&gt;
Scalability: Handles petabytes of data without configuration changes&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of This Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Decoupling Operations&lt;/strong&gt;&lt;br&gt;
By separating audit capture, transport, and storage into distinct layers, each component can be scaled, maintained, and upgraded independently. Your core application doesn't need to know or care about where audit logs ultimately end up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple Destinations&lt;/strong&gt;&lt;br&gt;
Once audit events are flowing through the pipeline, routing them to multiple destinations becomes trivial. You might:&lt;/p&gt;

&lt;p&gt;Send recent events to a hot analytics database for real-time dashboards&lt;br&gt;
Archive all events to object storage for compliance&lt;br&gt;
Stream specific events to client-accessible APIs&lt;br&gt;
Feed events into a data lake for data warehouse construction&lt;br&gt;
Build data lineage graphs showing how values changed over time&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specialized Tooling&lt;/strong&gt;&lt;br&gt;
Each component in our pipeline is purpose-built for its role. We're not trying to make our Elixir application handle message queuing, or forcing our database to store years of historical data. We use mature protocols (AMQP, S3) and battle-tested services, reducing the surface area for bugs and operational issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance at Scale&lt;/strong&gt;&lt;br&gt;
Our operational database stays fast because we're continuously moving audit data out. LavinMQ handles traffic spikes without blocking database commits. Vector batches writes to optimize storage operations. The result is a system that handles millions of events per month without breaking a sweat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Considerations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Monitoring and Observability&lt;/strong&gt;&lt;br&gt;
With a multi-component pipeline, observability is crucial. We monitor:&lt;/p&gt;

&lt;p&gt;Outbox table size and processing lag&lt;br&gt;
LavinMQ queue depths and consumption rates&lt;br&gt;
Vector processing rates and error counts&lt;br&gt;
Backblaze write success rates and latency&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Retention and Compliance&lt;/strong&gt;&lt;br&gt;
The pipeline makes it easy to implement sophisticated retention policies. Recent data might stay in a fast query layer for 90 days, then move to cheaper storage for 7 years to meet compliance requirements, then be deleted or archived to glacier storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backpressure Handling&lt;/strong&gt;&lt;br&gt;
Each component needs to handle backpressure gracefully. If Vector can't keep up with LavinMQ, messages queue in the broker. If Backblaze is slow, Vector batches and retries. The outbox pattern ensures no data is lost even if downstream systems are temporarily unavailable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a scalable audit logging pipeline requires thinking beyond simple database inserts. By leveraging the outbox pattern with Carbonite, and building a decoupled event pipeline with LavinMQ, Vector.dev, and Backblaze, we've created a system that:&lt;/p&gt;

&lt;p&gt;Captures every change automatically at the database level&lt;br&gt;
Handles millions of events per month without impacting application performance&lt;br&gt;
Provides flexibility to route audit data to multiple destinations&lt;br&gt;
Uses mature, purpose-built tools for each layer&lt;br&gt;
Scales horizontally without architectural changes&lt;/p&gt;

&lt;p&gt;For applications in regulated industries or where data lineage is critical, this architecture provides the foundation for comprehensive audit logging that doesn't compromise on performance or scalability. The key is recognizing that audit logging is a data pipeline problem, not just a database problem, and designing accordingly.&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>outbox</category>
      <category>eventdriven</category>
      <category>data</category>
    </item>
    <item>
      <title>Building Resilient Event-Driven Systems: Lessons from the Distributed Trenches</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Tue, 04 Nov 2025 07:49:55 +0000</pubDate>
      <link>https://forem.com/darnahsan/building-resilient-event-driven-systems-lessons-from-the-distributed-trenches-2akg</link>
      <guid>https://forem.com/darnahsan/building-resilient-event-driven-systems-lessons-from-the-distributed-trenches-2akg</guid>
      <description>&lt;p&gt;When you first look at a distributed architecture diagram with services scattered across multiple cloud providers, regions, and continents, it's easy to feel overwhelmed. Network partitions, timeouts, SSL handshake failures, connection drops—the list of things that can go wrong seems endless. And they do go wrong, constantly. In our globally distributed application running across 5 regions on Fly.io, we see these failures every single day. Phoenix.PubSub disconnecting from Redis. PostgreSQL protocols timing out mid-query. RabbitMQ brokers closing connections unexpectedly. It looks scary, and honestly, it should be. But here's the thing: with the right architectural choices, the right technology stack, and a healthy respect for the fallacies of distributed computing, you can build systems that not only survive these failures but thrive because of how they handle them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Network Is Not Reliable (And Never Will Be)
&lt;/h2&gt;

&lt;p&gt;Peter Deutsch's famous "8 Fallacies of Distributed Computing" starts with the most dangerous assumption: the network is reliable. It's not. When you're orchestrating services across Redis on Upstash, LavinMQ on CloudAMQP, Postgres on Neon, and storage on Backblaze B2—all while serving users globally through BunnyCDN—you're essentially building on top of organized chaos. The cost of communication is never zero, and latency isn't just a number on a dashboard; it's a real constraint that shapes your entire architecture.&lt;br&gt;
Looking at our Sentry error logs tells the story plainly: DBConnection.ConnectionError: ssl recv (idle): timeout, Phoenix.PubSub disconnected from Redis with reason :ssl_closed, Cannot connect to RabbitMQ broker: :timeout. These aren't exceptional circumstances; they're Tuesday. The errors show up, they resolve themselves, and the application keeps running. That's not luck—that's design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkybb3b6xuitx07x17bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkybb3b6xuitx07x17bu.png" alt=" " width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxnwmsxxc9lj2w7xdxk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxnwmsxxc9lj2w7xdxk4.png" alt=" " width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let It Crash: More Than Just a Slogan
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fjdpt3cez0ak4sjwevr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fjdpt3cez0ak4sjwevr.png" alt=" " width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Elixir's "let it crash" philosophy, inherited from Erlang's decades of building fault-tolerant telecom systems, is often misunderstood. It doesn't mean writing careless code or ignoring errors. It means designing systems where failures are isolated, supervised, and automatically recovered from. When a PostgreSQL connection times out after 4 days of idle time, the connection process crashes, and the supervisor immediately spawns a new one. When Phoenix.PubSub loses its Redis connection, it gracefully reconnects without taking down the entire application.&lt;br&gt;
This is visible in our error patterns. Notice how the Postgrex protocol disconnections show up with (No error message) but are all marked as Resolved. The system detected the failure, handled it, and moved on. No manual intervention. No emergency pages at 3 AM. The BEAM VM's process isolation means that a failing database connection doesn't cascade into a system-wide outage—it's contained, logged, and recovered.&lt;/p&gt;
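
&lt;p&gt;The supervision idea in miniature: connection processes sit under a supervisor, and when one crashes the supervisor restarts just that child. The child modules below are illustrative.&lt;/p&gt;

```elixir
# Sketch of a supervisor isolating connection failures. If
# MyApp.DatabaseConnection crashes on a timeout, only it is restarted;
# MyApp.PubSubConnection keeps running untouched.
defmodule MyApp.ConnectionSupervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    children = [
      MyApp.DatabaseConnection,
      MyApp.PubSubConnection
    ]

    # :one_for_one restarts only the child that crashed.
    Supervisor.init(children, strategy: :one_for_one)
  end
end
```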

&lt;h2&gt;
  
  
  Event-Driven Architecture: Decoupling for Resilience
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjntk6fl6vpu57nddtwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjntk6fl6vpu57nddtwk.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the heart of our architecture lies event-driven design, and it's what allows us to scale without losing data. When a user uploads a file, that action triggers events that flow through the system asynchronously. The web request completes quickly, returning control to the user, while the actual processing happens in the background through LavinMQ for inter-service communication and Oban for background job processing.&lt;br&gt;
Here's why this matters: when LavinMQ has a momentary connection issue (as we see in the logs with RuntimeError: unexpected error when connecting to RabbitMQ broker), the messages aren't lost. They're persisted in PostgreSQL through Oban's reliable job queue. The system retries with exponential backoff—starting with short delays and gradually increasing them to avoid overwhelming a recovering service. This retry strategy is visible in our error resolution times: some errors resolve in minutes, others take hours, but they all resolve without manual intervention.&lt;br&gt;
Using Postgres as Oban's backend is a deliberate choice. While LavinMQ handles the real-time, high-throughput message passing between services, Oban manages critical background tasks that absolutely cannot be lost. Database-backed job queues give us transactional guarantees—if the job is enqueued, it will eventually be processed, even if it takes multiple retries across connection failures.&lt;/p&gt;
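
&lt;p&gt;The Oban half of this can be sketched as a worker plus a transactional enqueue: the job row commits in the same database transaction as the business write, so an enqueued job is never lost to a broker hiccup. Module names and arguments here are illustrative.&lt;/p&gt;

```elixir
# Illustrative Oban worker with retries (Oban backs off between attempts).
defmodule MyApp.Workers.ProcessUpload do
  use Oban.Worker, queue: :uploads, max_attempts: 10

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"upload_id" => id}}) do
    # Returning {:error, reason} or raising schedules a retry.
    MyApp.Uploads.process(id)
  end
end

# Enqueue inside the same transaction as the business write:
# either both commit or neither does.
Ecto.Multi.new()
|> Ecto.Multi.insert(:upload, upload_changeset)
|> Oban.insert(:job, MyApp.Workers.ProcessUpload.new(%{upload_id: 123}))
|> MyApp.Repo.transaction()
```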

&lt;h2&gt;
  
  
  The Technology Stack: Intentional Choices
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk1jgmy4bogrc6exowya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk1jgmy4bogrc6exowya.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
Every piece of our stack was chosen with resilience and global distribution in mind:&lt;br&gt;
Fly.io provides globally distributed compute with applications running in 5 regions simultaneously. When a region has issues, traffic automatically routes to healthy regions. The edge-first architecture means users always connect to the nearest available instance.&lt;br&gt;
Upstash Redis gives us globally replicated cache and pub/sub capabilities. Even when connections drop (as they inevitably do), the application degrades gracefully, fetching from the database instead of serving stale cache or crashing.&lt;br&gt;
CloudAMQP's LavinMQ offers a lightweight, high-performance message broker. Its RabbitMQ compatibility means we get the proven AMQP protocol with better performance characteristics. Message persistence ensures that even during connection issues, messages wait in queues rather than disappearing into the void.&lt;br&gt;
Neon's Postgres provides serverless Postgres with branching and point-in-time recovery. The connection pooling and automatic scaling mean we can handle traffic spikes without manual database provisioning. When we see those ssl recv (idle): timeout errors, it's often because a connection has been idle during low-traffic periods—Neon's serverless nature shuts down idle resources, and our connection pooler handles reconnection transparently.&lt;br&gt;
Backblaze B2 with S3 protocol gives us cost-effective, reliable object storage. The S3 compatibility means we can use battle-tested client libraries, and the global CDN integration through BunnyCDN ensures low-latency access worldwide.&lt;br&gt;
CircleCI handles our continuous deployment pipeline, automatically deploying code changes across all regions. This is crucial because fixing issues often means deploying new code, and we need that process to be fast and reliable.&lt;br&gt;
Sentry is our observability layer, showing us not just when things fail, but how they fail and how they recover. The error patterns in Sentry guided many of our retry strategies and timeout configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reactive Manifesto in Practice
&lt;/h2&gt;

&lt;p&gt;Our architecture embodies the principles of the Reactive Manifesto—responsive, resilient, elastic, and message-driven:&lt;br&gt;
Responsive: Users get fast responses because we don't block on slow operations. File uploads return immediately while processing happens asynchronously.&lt;br&gt;
Resilient: Failures are isolated and don't cascade. A database timeout doesn't crash the application; it crashes a single connection process that's immediately restarted.&lt;br&gt;
Elastic: Our globally distributed compute and serverless database scale up and down based on demand without manual intervention.&lt;br&gt;
Message-driven: Events flow through message queues, decoupling services and allowing them to process work at their own pace while maintaining backpressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Patterns for Resilience
&lt;/h2&gt;

&lt;p&gt;Several concrete patterns emerge from our experience:&lt;br&gt;
Exponential backoff with jitter spaces out retries intelligently. The first retry happens quickly (maybe the network hiccup was momentary), but subsequent retries back off exponentially, with random jitter to prevent thundering herd problems.&lt;br&gt;
Dead letter queues capture messages that fail repeatedly after all retries. This prevents poison messages from blocking queue processing while ensuring we can investigate and manually recover them later.&lt;br&gt;
Idempotent operations ensure that retries don't cause duplicate side effects. When LavinMQ redelivers a message after a connection issue, we can safely process it again without corrupting data.&lt;br&gt;
Graceful degradation means falling back to reduced functionality rather than failing completely. If Redis is down, we serve from a stale in-memory cache rather than throwing errors.&lt;/p&gt;
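
&lt;p&gt;The first of those patterns fits in a few lines. A self-contained sketch of exponential backoff with jitter: the delay doubles per attempt up to a cap, with up to 25% random jitter so retrying callers do not wake up in lockstep.&lt;/p&gt;

```elixir
defmodule Backoff do
  @base_ms 100
  @max_ms 30_000

  # Delay in milliseconds before retry number `attempt` (0-based).
  def delay(attempt) when attempt >= 0 do
    capped = min(@base_ms * Integer.pow(2, attempt), @max_ms)
    # :rand.uniform(n) returns 1..n, so the added jitter is 0..(capped/4).
    capped + :rand.uniform(div(capped, 4) + 1) - 1
  end
end
```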

&lt;h2&gt;
  
  
  The Cost of Resilience
&lt;/h2&gt;

&lt;p&gt;Building resilient distributed systems isn't free. There's operational complexity—more services to monitor, more failure modes to understand, more logs to analyze. There's performance overhead—retries mean higher latency for failed operations, circuit breakers mean some requests get rejected that might have eventually succeeded.&lt;br&gt;
But the alternative is worse. Without proper retry logic, timeout handling, and failure isolation, that DBConnection.ConnectionError doesn't just log to Sentry and resolve itself—it crashes your application. That momentary Redis connection drop doesn't trigger a graceful reconnection—it takes down your real-time features until someone manually restarts the service.&lt;br&gt;
Our error logs show the price we pay for resilience: hundreds of timeout errors, connection failures, and protocol disconnections every week. But they also show something more important: they all resolve themselves. The system heals, automatically, without human intervention. That's what resilience looks like in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Embrace the Chaos
&lt;/h2&gt;

&lt;p&gt;Distributed systems are inherently chaotic. Networks partition, services time out, connections close unexpectedly. You can't prevent these failures, but you can design for them. Elixir's supervisor trees give you automatic recovery. Event-driven architecture gives you decoupling and scalability. Proper retry strategies give you resilience. Message queues give you durability.&lt;br&gt;
When we look at our Sentry dashboard and see all those resolved errors—PostgreSQL timeouts, Redis disconnections, RabbitMQ failures—we don't see problems. We see evidence that the system is working as designed. Each of those errors represents a moment where the system detected a failure, handled it gracefully, and recovered automatically.&lt;br&gt;
Building distributed systems is still scary. The architecture diagrams are still complex. The failure modes are still numerous. But with the right tools, the right patterns, and a healthy respect for the network's unreliability, you can build systems that are truly resilient—systems that bend but don't break, that stumble but don't fall, that meet the challenges of global distribution head-on and emerge stronger for it.&lt;br&gt;
The network will fail. Your services will time out. Connections will drop. Build accordingly.&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>elixir</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Bridging the AI Gap: Simplifying LLM Development in Elixir with langchain</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Thu, 01 May 2025 16:11:57 +0000</pubDate>
      <link>https://forem.com/darnahsan/bridging-the-ai-gap-simplifying-llm-development-in-elixir-with-langchainex-3o42</link>
      <guid>https://forem.com/darnahsan/bridging-the-ai-gap-simplifying-llm-development-in-elixir-with-langchainex-3o42</guid>
      <description>&lt;p&gt;Artificial Intelligence is evolving at a breakneck pace. Every week, it feels like there’s a new breakthrough, a novel architecture, or yet another company launching its own large language model (LLM). With so much innovation happening simultaneously, keeping up with the ever-changing landscape can feel overwhelming — even for seasoned developers.&lt;/p&gt;

&lt;p&gt;One of the biggest challenges developers face today is inconsistency across platforms. While OpenAI pioneered the API format that many now follow, major inference service providers such as Anthropic, Hugging Face, and Google Cloud have each developed their own unique interfaces. This fragmentation makes it difficult to switch between models or providers without rewriting significant chunks of code.&lt;/p&gt;

&lt;p&gt;This is where LangChain comes into play. Originally built for Python, LangChain quickly became the go-to framework for working with LLMs. It introduced a standardized way to interact with different models and APIs, abstracting away the complexity and letting developers focus on building powerful applications.&lt;/p&gt;

&lt;p&gt;But what if you're an Elixir developer? That’s where the Elixir community steps in with langchain — an Elixir port of the LangChain framework.&lt;/p&gt;

&lt;p&gt;Created by members of the Elixir ecosystem, langchain brings the same power and flexibility of LangChain to Elixir developers. It provides a consistent abstraction layer over various inference providers, allowing you to seamlessly integrate with OpenAI, Anthropic, Hugging Face, and more — all without worrying about the nuances of each provider’s API.&lt;/p&gt;
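
&lt;p&gt;To make the abstraction concrete, here is a rough sketch of what chatting through langchain looks like, based on the library's documented chain API (details differ between versions, and the model name is a placeholder). Swapping providers means swapping the chat model module, not rewriting the request format.&lt;/p&gt;

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around one provider, add a message, run it.
{:ok, chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o-mini"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Summarize this document in one line."))
  |> LLMChain.run()

# The assistant's reply:
chain.last_message.content
```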

&lt;p&gt;Recently, I was working with Google Gemini models via their Vertex AI API. Initially, I had hand-rolled the HTTP requests using Google’s API format. But after discovering langchain, I decided to refactor my implementation to use the framework instead.&lt;/p&gt;

&lt;p&gt;However, I hit a roadblock — file URL support wasn’t implemented yet for Vertex AI.&lt;/p&gt;

&lt;p&gt;Now here’s where things get interesting:&lt;/p&gt;

&lt;p&gt;While Google AI Studio only allows file URLs uploaded through its own file service, Vertex AI supports processing media from any publicly crawlable third-party URL — a very useful feature for real-world applications where files might live outside of Google’s ecosystem.&lt;/p&gt;

&lt;p&gt;At the time, the Elixir community had added file URL support for Google AI Studio just a month earlier, likely because it’s the more commonly used platform among casual users. But Vertex AI, popular among enterprise developers and production setups, didn't yet have this capability in langchain.&lt;/p&gt;

&lt;p&gt;So I decided to step in.&lt;/p&gt;

&lt;p&gt;Building on top of the great work already done by the community, I submitted a &lt;a href="https://github.com/brainlid/langchain/pull/296" rel="noopener noreferrer"&gt;PR&lt;/a&gt; to add full file URL support for Gemini models via Vertex AI in langchain — and good news: it got merged!&lt;/p&gt;

&lt;p&gt;This enhancement now enables developers to:&lt;/p&gt;

&lt;p&gt;Pass public URLs directly to Gemini models via Vertex AI&lt;br&gt;
Process images, PDFs, and other media formats hosted externally&lt;br&gt;
Leverage the full capabilities of Gemini within the Elixir ecosystem&lt;/p&gt;

&lt;p&gt;Without langchain, the implementation against Vertex AI looks like the following: you maintain the provider-specific request format yourself, and it changes for each provider.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt; &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;inference&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mime_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
     &lt;span class="n"&gt;json&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
       &lt;span class="s2"&gt;"contents"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
         &lt;span class="s2"&gt;"role"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="s2"&gt;"parts"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
           &lt;span class="p"&gt;%{&lt;/span&gt;
             &lt;span class="s2"&gt;"fileData"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
               &lt;span class="s2"&gt;"mimeType"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;mime_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="s2"&gt;"fileUri"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;content_url&lt;/span&gt;
             &lt;span class="p"&gt;}&lt;/span&gt;
           &lt;span class="p"&gt;},&lt;/span&gt;
           &lt;span class="p"&gt;%{&lt;/span&gt;
             &lt;span class="s2"&gt;"text"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
           &lt;span class="p"&gt;}&lt;/span&gt;
         &lt;span class="p"&gt;]&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
       &lt;span class="s2"&gt;"generationConfig"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
         &lt;span class="s2"&gt;"temperature"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="s2"&gt;"topP"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="s2"&gt;"topK"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;

     &lt;span class="no"&gt;Req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;post!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vertex_endpoint&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
       &lt;span class="ss"&gt;auth:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:bearer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;auth_token&lt;/span&gt;&lt;span class="p"&gt;()},&lt;/span&gt;
       &lt;span class="ss"&gt;json:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="ss"&gt;receive_timeout:&lt;/span&gt; &lt;span class="nv"&gt;@timeout&lt;/span&gt;
     &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;
     &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
       &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="s2"&gt;"candidates"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[%{&lt;/span&gt;&lt;span class="s2"&gt;"content"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="s2"&gt;"parts"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[%{&lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;}]}}]}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

       &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
           &lt;span class="s2"&gt;"code"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"details"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="s2"&gt;"message"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="s2"&gt;"status"&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;
         &lt;span class="p"&gt;}&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
         &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"GCloud Vertex API error {status} ({code}), {message}"&lt;/span&gt;
         &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
         &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="k"&gt;end&lt;/span&gt;
   &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same call using the Elixir langchain library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;
 &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;inference&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mime_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
      &lt;span class="ss"&gt;model:&lt;/span&gt; &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;endpoint:&lt;/span&gt; &lt;span class="n"&gt;vertex_endpoint&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;api_key:&lt;/span&gt; &lt;span class="n"&gt;auth_token&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;temperature:&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;top_p:&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;top_k:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;receive_timeout:&lt;/span&gt; &lt;span class="nv"&gt;@timeout&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;
      &lt;span class="ss"&gt;mime_type:&lt;/span&gt; &lt;span class="n"&gt;mime_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;url:&lt;/span&gt; &lt;span class="n"&gt;content_url&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;vertex_ai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;changeset&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;changeset&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;vertex_ai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ChatVertexAI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;llm:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;verbose:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;stream:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;LLMChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new!&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;LLMChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="no"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new_user!&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
        &lt;span class="no"&gt;ContentPart&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new!&lt;/span&gt;&lt;span class="p"&gt;(%{&lt;/span&gt;&lt;span class="ss"&gt;type:&lt;/span&gt; &lt;span class="ss"&gt;:text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;content:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="no"&gt;ContentPart&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;new!&lt;/span&gt;&lt;span class="p"&gt;(%{&lt;/span&gt;
          &lt;span class="ss"&gt;type:&lt;/span&gt; &lt;span class="ss"&gt;:file_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;content:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;options:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;media:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mime_type&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;LLMChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;parse_content&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;parse_content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
         &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;%&lt;/span&gt;&lt;span class="no"&gt;LangChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Chains&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="ss"&gt;llm:&lt;/span&gt; &lt;span class="n"&gt;_llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="ss"&gt;last_message:&lt;/span&gt; &lt;span class="p"&gt;%&lt;/span&gt;&lt;span class="no"&gt;LangChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="ss"&gt;content:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;%&lt;/span&gt;&lt;span class="no"&gt;LangChain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;ContentPart&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;type:&lt;/span&gt; &lt;span class="ss"&gt;:text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;content:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
              &lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;}}&lt;/span&gt;
       &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;

    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;parse_content&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;changeset&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;changeset&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"Langchain encountered error !!!!"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As AI continues to evolve, tools like LangChain and its Elixir counterpart, langchain, are essential for developers looking to harness the power of LLMs without getting bogged down by platform-specific quirks.&lt;/p&gt;

&lt;p&gt;Whether you're building chatbots, data analyzers, or intelligent agents, langchain empowers you to stay agile, innovative, and ahead of the curve — all while staying true to the elegance and performance of Elixir.&lt;/p&gt;

&lt;p&gt;So if you’re an Elixir developer diving into the world of AI, don’t reinvent the wheel. Let langchain do the heavy lifting, while you focus on creating something amazing with the power of Elixir.&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>ai</category>
      <category>langchain</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Overcoming Challenges with Google Vertex API, s6-overlay, and Fly.io: A Journey to Seamless Container Orchestration</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Sun, 02 Feb 2025 16:36:55 +0000</pubDate>
      <link>https://forem.com/darnahsan/overcoming-challenges-with-google-vertex-api-s6-overlay-and-flyio-a-journey-to-seamless-4ij6</link>
      <guid>https://forem.com/darnahsan/overcoming-challenges-with-google-vertex-api-s6-overlay-and-flyio-a-journey-to-seamless-4ij6</guid>
      <description>&lt;p&gt;When working with Google Vertex AI's API, one of the challenges developers face is its reliance on access tokens that are valid for only 3600 seconds (or 1 hour). Unlike other APIs that support long-lived API tokens, Vertex AI requires you to generate and refresh these short-lived access tokens. This introduces additional complexity when deploying applications, especially in containerized environments where stateless operations are preferred.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll explore how to integrate Google Vertex AI's authentication requirements into an Elixir application running inside a Docker container on Fly.io. We'll also discuss how to use s6-overlay for process orchestration, ensuring smooth execution of database migrations, GCloud CLI setup, and maintaining statelessness by avoiding hardcoding sensitive credentials like service account key files.&lt;/p&gt;

&lt;p&gt;To authenticate with Google Vertex AI, you need to generate an access token using the gcloud CLI. This involves setting up a service account, obtaining a JSON key file, and authenticating the CLI. In a containerized environment, this setup must be automated and integrated seamlessly into your application's lifecycle.&lt;/p&gt;
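&lt;p&gt;For reference, the token generation described above boils down to two gcloud CLI commands (the key-file path below is a placeholder):&lt;/p&gt;

```shell
# Authenticate the gcloud CLI with a service-account JSON key
# (the path below is a placeholder for wherever your key lives).
gcloud auth activate-service-account --key-file="/path/to/service-account.json"

# Print a short-lived (3600 s) OAuth2 access token, usable as a
# Bearer token against the Vertex AI REST API.
gcloud auth print-access-token
```

&lt;p&gt;Because the token expires after an hour, whatever produces it has to be re-runnable inside the container's lifecycle, which is exactly what the orchestration below handles.&lt;/p&gt;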

&lt;p&gt;Additionally, when deploying to platforms like Fly.io, there are specific constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fly.io deployment flow: after a successful deployment, Fly.io runs migration commands as part of the release process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Firecracker VMs: Fly.io containers run as lightweight virtual machines (Firecracker VMs), which can interfere with traditional &lt;code&gt;ENTRYPOINT&lt;/code&gt; setups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These challenges call for a robust way to manage processes within the container, ensuring that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Database migrations run before the app starts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The gcloud CLI is properly configured during the app's lifecycle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The container remains stateless, avoiding hardcoded credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For container process orchestration, I’ve come to rely on s6-overlay. It’s a lightweight, powerful tool that allows you to manage multiple processes in a container, define dependencies between them, and ensure they start and stop in the correct order. It’s perfect for scenarios where you need to run one-off tasks (like database migrations) alongside long-running processes (like an Elixir app).&lt;/p&gt;

&lt;p&gt;In this case, I needed to set up a dependency graph like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;migration → Elixir app → gcloud&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Migration: A one-shot task that runs database migrations.&lt;/p&gt;

&lt;p&gt;Elixir app: The main application that runs continuously.&lt;/p&gt;

&lt;p&gt;gcloud: A one-shot task that sets up the gcloud CLI and generates the access token.&lt;/p&gt;

&lt;p&gt;Both the migration and gcloud tasks are one-shot processes—they only need to run once per container lifecycle. The Elixir app, however, is a long-running process. If gcloud fails, it only affects the function that relies on the Google Vertex API, allowing the rest of the app to continue running.&lt;/p&gt;
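&lt;p&gt;As a sketch, this dependency graph maps onto an s6-rc.d service tree. The service names here are illustrative, and a temporary directory stands in for the real &lt;code&gt;/etc/s6-overlay/s6-rc.d&lt;/code&gt;:&lt;/p&gt;

```shell
# Build an s6-rc.d tree mirroring: migration -> Elixir app -> gcloud.
# A temp dir stands in for /etc/s6-overlay/s6-rc.d; names are illustrative.
root=$(mktemp -d)

for svc in migrate app gcloud-setup; do
  mkdir -p "$root/$svc/dependencies.d"
done

echo oneshot > "$root/migrate/type"       # one-shot: DB migrations
echo longrun > "$root/app/type"           # long-running: the Elixir app
echo oneshot > "$root/gcloud-setup/type"  # one-shot: gcloud CLI setup

# Encode the ordering: app waits for migrate; gcloud-setup waits for app.
touch "$root/app/dependencies.d/migrate"
touch "$root/gcloud-setup/dependencies.d/app"

# Register all three services in the user bundle so s6-rc starts them.
mkdir -p "$root/user/contents.d"
touch "$root/user/contents.d/migrate" \
      "$root/user/contents.d/app" \
      "$root/user/contents.d/gcloud-setup"
```

&lt;p&gt;With oneshot services, the command to run goes in an &lt;code&gt;up&lt;/code&gt; file; longrun services use a &lt;code&gt;run&lt;/code&gt; script.&lt;/p&gt;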

&lt;p&gt;Deploying to Fly.io added another layer of complexity. Fly.io uses Firecracker VMs to run containers, and this introduced an unexpected issue: s6-overlay wouldn’t start as usual. After some digging, I found a &lt;a href="https://gist.github.com/darkrain42/02fa589002afa645912d8f8d87bf55f8" rel="noopener noreferrer"&gt;GitHub gist&lt;/a&gt; where someone had faced a similar problem and shared a solution. With a few tweaks, I was able to adapt their approach to get s6-overlay working in the Fly.io environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENTRYPOINT [ \
    "unshare", "--pid", "--fork", "--kill-child=SIGTERM", "--mount-proc", \
    "perl", "-e", "$SIG{INT}=''; $SIG{TERM}=''; exec @ARGV;", "--", \
    "/init" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the key benefits of using s6-overlay was the ability to keep the container stateless. Instead of copying the gcloud keyfile into the container, I passed it as an environment variable. The s6-overlay setup script then used this variable to authenticate the gcloud CLI. This approach not only simplified the container setup but also improved security by avoiding hardcoding sensitive credentials into the container image.&lt;/p&gt;
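&lt;p&gt;A minimal sketch of that setup step, assuming the key JSON arrives in an environment variable named &lt;code&gt;GCLOUD_KEYFILE_JSON&lt;/code&gt; (the variable name is an assumption):&lt;/p&gt;

```shell
# Write the service-account key from the environment to a throwaway file,
# then hand it to gcloud. GCLOUD_KEYFILE_JSON is a hypothetical name; the
# default below is only a placeholder so the sketch runs standalone.
GCLOUD_KEYFILE_JSON=${GCLOUD_KEYFILE_JSON:-'{"type": "service_account"}'}

keyfile=$(mktemp)
printf '%s' "$GCLOUD_KEYFILE_JSON" > "$keyfile"
chmod 600 "$keyfile"

# Only attempt authentication when the CLI is actually present.
if command -v gcloud > /dev/null; then
  gcloud auth activate-service-account --key-file="$keyfile"
fi
```

&lt;p&gt;The key file only ever exists inside the running VM, so the container image itself stays credential-free.&lt;/p&gt;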

&lt;p&gt;Here’s a high-level overview of the final setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Service Account Keyfile: Stored as an environment variable in Fly.io.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;s6-overlay: Used to orchestrate the processes and manage dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migration Script: Run as a one-shot task before the Elixir app starts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;gcloud Setup: Run as a one-shot task to authenticate and generate the access token.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elixir App: The main application, which starts after the migration and gcloud tasks complete.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The folder structure used to set up these processes is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gagb7szv5u55u2a2czd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gagb7szv5u55u2a2czd.png" alt="s6-folder-structure" width="432" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integrating Google Vertex API into an Elixir app deployed on Fly.io was a challenging but rewarding experience. By leveraging s6-overlay for process orchestration and adapting to Fly.io’s unique environment, I was able to create a robust, stateless deployment pipeline. If you’re facing similar challenges, I hope this post provides some inspiration and guidance.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>googlecloud</category>
      <category>ai</category>
      <category>aiops</category>
    </item>
    <item>
      <title>Leveraging GenServer and Queueing Techniques: Handling API Rate Limits to AI Inference services</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Sat, 02 Nov 2024 08:38:17 +0000</pubDate>
      <link>https://forem.com/darnahsan/leveraging-genserver-and-queueing-techniques-handling-api-rate-limits-to-ai-inference-services-3i68</link>
      <guid>https://forem.com/darnahsan/leveraging-genserver-and-queueing-techniques-handling-api-rate-limits-to-ai-inference-services-3i68</guid>
      <description>&lt;p&gt;In the realm of efficient application development, managing external service rate limits is a pivotal challenge. Recently faced this task while interfacing with the Fireworks serverless API. In the world of serverless APIs, rate limits can be a significant challenge to overcome. The Fireworks AI platform, in particular, comes with a shared 600 requests per minute limit between inference and embedding functionalities. However, with the right approach, it's possible to optimize this limit to accommodate multiple users and ensure consistent response times.Fireworks provides a set of 2 API keys which means you can go upto 1200 req/min if you can successfully load balance between them. I wrote a service called ping pong to do that but we won't be discussing about Load balancing. We will be going over more exciting bit of ping pong about how to manage rate limit and queue incoming requests to not drop any request using GenServer and queue them with an acceptable timeout limit.&lt;/p&gt;

&lt;p&gt;In a typical scenario, users may submit many requests simultaneously, each one consuming a slice of the available quota. For instance, if 600 requests arrive within the first ten seconds of a window, every request during the remaining 50 seconds will be rate-limited.&lt;/p&gt;

&lt;p&gt;However, we needed a solution that could efficiently process all incoming requests while maintaining fairness and preventing any potential overloading. Our approach involved utilizing GenServer and queues to manage these requests effectively.&lt;/p&gt;

&lt;p&gt;GenServer is a powerful tool in Elixir for managing stateful processes. It allows us to hold the request-count state and queue up new requests, ensuring they are processed in an orderly manner without overwhelming the system. The built-in timeout mechanism of GenServers ensures that if a wait becomes too long, older requests are dropped gracefully rather than forming an uncontrollable backlog that would exhaust system resources, preserving fairness and balance.&lt;/p&gt;

&lt;p&gt;We decided to run a separate rate-limiter GenServer process for each of the inference and embedding functionalities, because they have different priorities. For example, if an inference request comes in while an embedding request is being processed, it's essential to prioritize the more frequent request type first.&lt;/p&gt;

&lt;p&gt;In our setup, we defined a limit of 400 requests per minute for inference and 200 for embedding, knowing that inference would typically have higher frequency. This method ensures that critical or high-priority tasks receive timely attention while less demanding operations can be processed with lower priority but still within acceptable limits.&lt;/p&gt;
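&lt;p&gt;A quick back-of-the-envelope check of those numbers, spreading each limit evenly over the 60-second window:&lt;/p&gt;

```shell
# Minimum spacing between requests if each limit is spread evenly
# across the 60 s (60,000 ms) rate-limit window.
window_ms=60000

inference_gap_ms=$((window_ms / 400))   # 400 req/min for inference
embedding_gap_ms=$((window_ms / 200))   # 200 req/min for embedding

echo "inference: ${inference_gap_ms}ms, embedding: ${embedding_gap_ms}ms"
# prints: inference: 150ms, embedding: 300ms
```

&lt;p&gt;This is the same per-request sleep that the rate limiter below derives from &lt;code&gt;@rate_limit_window / rate_limit&lt;/code&gt;.&lt;/p&gt;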

&lt;p&gt;When there are no requests in the queue, it's crucial to handle the situation gracefully rather than waiting indefinitely. In such cases we might still see bursts of 600 or more incoming requests that don't overlap with one another. To keep operation smooth, retries are essential.&lt;/p&gt;

&lt;p&gt;For these instances, ElixirRetry, a powerful library for handling retries, comes into play. It provides built-in retry logic with backoff strategies, ensuring the system can handle transient issues without hanging or becoming overloaded.&lt;/p&gt;

&lt;p&gt;This is where Ping Pong, the bespoke service I developed, comes into play: it balances the load and ensures no request is dropped due to rate-limit constraints. Its source code is available at &lt;a href="https://github.com/ahsandar/ping_pong" rel="noopener noreferrer"&gt;https://github.com/ahsandar/ping_pong&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The rate-limiter GenServer code is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;
&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;PingPong&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;RateLimiter&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;GenServer&lt;/span&gt;
  &lt;span class="kn"&gt;require&lt;/span&gt; &lt;span class="no"&gt;Logger&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;PingPong&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;Utility&lt;/span&gt;
  &lt;span class="nv"&gt;@rate_limit_window&lt;/span&gt; &lt;span class="mi"&gt;60_000&lt;/span&gt;
  &lt;span class="nv"&gt;@safe_limit&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;GenServer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:ping_pong&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cachex_counter&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;%{&lt;/span&gt;
       &lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="ss"&gt;start_time:&lt;/span&gt; &lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime_iso8601&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
       &lt;span class="ss"&gt;rate_limit:&lt;/span&gt; &lt;span class="no"&gt;String&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_integer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:rate_limit&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s2"&gt;"60"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="p"&gt;}}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:control_rate_limit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_from&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;sleep_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@rate_limit_window&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rate_limit&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;trunc&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:ping_pong&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cachex_counter&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Count since &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_time&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;@safe_limit&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s2"&gt;"Ensuring &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; rate limit at &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;datetime_iso8601&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, waiting for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;sleep_time&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;

      &lt;span class="ss"&gt;:timer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sleep_time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Request count in safe zone"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;

    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;decr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:ping_pong&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cachex_counter&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Count since &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_time&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:reply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt; &lt;span class="p"&gt;\\&lt;/span&gt; &lt;span class="ss"&gt;:infinity&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;GenServer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:control_rate_limit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>elixir</category>
      <category>api</category>
      <category>serverless</category>
      <category>ai</category>
    </item>
    <item>
      <title>Pooling AMQP TLS connections in Elixir for high throughput and low latency</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Sun, 15 Sep 2024 03:33:10 +0000</pubDate>
      <link>https://forem.com/darnahsan/pooling-amqp-tls-connections-in-elixir-for-high-throughput-and-low-latency-3pgb</link>
      <guid>https://forem.com/darnahsan/pooling-amqp-tls-connections-in-elixir-for-high-throughput-and-low-latency-3pgb</guid>
      <description>&lt;p&gt;In the fast-paced, data-driven world of computing, reliable message brokers are critical for communication between services. One of the most widely adopted protocols in this domain is the Advanced Message Queuing Protocol (AMQP). As an open standard, AMQP has seen large-scale adoption across industries for its reliability, flexibility, and ability to handle both transactional and high-throughput workloads.&lt;/p&gt;

&lt;p&gt;AMQP serves as the backbone for several popular messaging systems, most notably RabbitMQ, which has become a go-to solution for many enterprise clients. But what makes RabbitMQ shine in comparison to alternatives like Kafka, and why is it favored by a wide range of businesses, from startups to established enterprises?&lt;/p&gt;

&lt;p&gt;The AMQP protocol is designed to ensure safe, guaranteed delivery of messages with features like message acknowledgments, routing, and delivery confirmations. Enterprises with complex architectures rely on these guarantees to keep their systems resilient. AMQP allows companies to achieve:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Transactional messaging&lt;/code&gt;: Ideal for financial systems or any environment where guaranteed message delivery is crucial.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Flexible message routing&lt;/code&gt;: Through its exchange types (direct, topic, fanout, headers), AMQP makes it easier to handle sophisticated routing logic.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Protocol independence&lt;/code&gt;: Since it's an open standard, AMQP works across multiple languages, platforms, and frameworks, making it adaptable to various enterprise ecosystems.&lt;/p&gt;

&lt;p&gt;This versatility has led to AMQP being the protocol of choice for major players in industries such as finance, retail, and telecommunications.&lt;/p&gt;
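To make the topic-style routing described above concrete, here is a simplified, self-contained sketch of the wildcard matching a topic exchange performs. This illustrates the semantics only and is not the `amqp` library API (brokers such as RabbitMQ do this matching server-side): `*` matches exactly one dot-separated word, `#` matches zero or more.

```elixir
# Simplified sketch of AMQP topic-exchange matching semantics.
# "*" matches exactly one dot-separated word; "#" matches zero or more.
defmodule TopicMatch do
  def match?(pattern, key) do
    do_match(String.split(pattern, "."), String.split(key, "."))
  end

  # Pattern and key fully consumed: match.
  defp do_match([], []), do: true
  # A trailing "#" swallows the rest of the key.
  defp do_match(["#"], _key), do: true
  # "#" in the middle: try consuming 0..n words of the key.
  defp do_match(["#" | rest], key),
    do: Enum.any?(0..length(key), fn n -> do_match(rest, Enum.drop(key, n)) end)
  # "*" consumes exactly one word.
  defp do_match(["*" | rest], [_word | tail]), do: do_match(rest, tail)
  # Literal words must be equal (same variable twice forces equality).
  defp do_match([word | rest], [word | tail]), do: do_match(rest, tail)
  defp do_match(_, _), do: false
end

IO.inspect(TopicMatch.match?("payments.*", "payments.created"))    # true
IO.inspect(TopicMatch.match?("payments.#", "payments.eu.created")) # true
IO.inspect(TopicMatch.match?("payments.*", "orders.created"))      # false
```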

&lt;p&gt;TCP-based connections can be resource-intensive, and in high-concurrency environments, it is crucial to manage connections efficiently. Using connection pooling allows developers to reuse connections instead of creating new ones for every request. This reduces:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Latency&lt;/code&gt;: Establishing a new connection for every task introduces delays. Connection pools maintain a set of reusable connections that can be quickly allocated to new requests.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Resource Utilization&lt;/code&gt;: Opening too many connections can exhaust memory and CPU resources, especially in large-scale deployments. Pooling ensures that the system remains performant and avoids unnecessary overhead.&lt;/p&gt;

&lt;p&gt;By optimizing connection management, pooling maintains efficiency even in high-load environments, making it well suited for high-throughput applications.&lt;/p&gt;

&lt;p&gt;Elixir, a modern programming language that runs on the BEAM VM, provides strong support for working with AMQP. Elixir brings:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Concurrency&lt;/code&gt;: Leveraging the BEAM’s concurrency model, Elixir developers can easily build distributed systems that communicate over AMQP, without worrying about thread management.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Robust Libraries&lt;/code&gt;: Libraries like &lt;code&gt;AMQP&lt;/code&gt; and &lt;code&gt;Broadway&lt;/code&gt; in Elixir allow developers to create pipelines for processing AMQP messages with ease. These libraries offer a higher level of abstraction, allowing developers to focus more on business logic rather than low-level messaging code.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Fault-Tolerant Systems&lt;/code&gt;: Since Elixir inherits the fault-tolerance properties of BEAM, developers can build resilient, highly available systems that can recover from failure seamlessly when handling message queues.&lt;/p&gt;

&lt;p&gt;For developers building messaging systems with AMQP, Elixir provides a natural and powerful toolset for building high-performance, fault-tolerant applications that scale efficiently.&lt;/p&gt;

&lt;p&gt;Now on to the interesting part: how to implement connection pooling in Elixir for &lt;code&gt;AMQP&lt;/code&gt;, using &lt;code&gt;Poolboy&lt;/code&gt; for publishing messages and &lt;code&gt;Broadway&lt;/code&gt; for consumption.&lt;/p&gt;

&lt;p&gt;Add &lt;code&gt;amqp&lt;/code&gt;, &lt;code&gt;poolboy&lt;/code&gt;, &lt;code&gt;broadway&lt;/code&gt;, and &lt;code&gt;broadway_rabbitmq&lt;/code&gt; to your dependencies.&lt;/p&gt;
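A minimal sketch of the corresponding `deps` entry in `mix.exs`; the version constraints are illustrative, and `castore` is included because the TLS setup below relies on `CAStore.file_path/0`:

```elixir
# mix.exs — messaging dependencies (version constraints are illustrative)
defp deps do
  [
    {:amqp, "~> 3.3"},
    {:poolboy, "~> 1.5"},
    {:broadway, "~> 1.0"},
    {:broadway_rabbitmq, "~> 0.8"},
    # CA certificate bundle used for TLS peer verification
    {:castore, "~> 1.0"}
  ]
end
```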

&lt;p&gt;A &lt;code&gt;GenServer&lt;/code&gt; maintains the connection and channel, and creates the queues upon initialization.&lt;/p&gt;

&lt;p&gt;The reconnection logic is adapted from &lt;a href="https://github.com/conduitframework/conduit_amqp/blob/master/lib/conduit_amqp/conn.ex" rel="noopener noreferrer"&gt;Conduit AMQP&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Broker&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Amqp.Broker
  """&lt;/span&gt;

  &lt;span class="kn"&gt;require&lt;/span&gt; &lt;span class="no"&gt;Logger&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;GenServer&lt;/span&gt;

  &lt;span class="nv"&gt;@exchange&lt;/span&gt; &lt;span class="s2"&gt;"topgun"&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_host&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;vhost&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_vhost&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_port&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_username&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_password&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;@exchange&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;GenServer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@impl&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
   &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@impl&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:get_channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_from&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:reply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@impl&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_call&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="ss"&gt;:create_queues&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;_from&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;@exchange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:reconnect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_from&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;connection:&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;channel:&lt;/span&gt; &lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:noreply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;connection:&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;channel:&lt;/span&gt; &lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="c1"&gt;# Schedule a retry with a delay&lt;/span&gt;
        &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;send_after&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="ss"&gt;:reconnect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:noreply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt; 
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_info&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="ss"&gt;:DOWN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_ref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:process&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_pid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_reason&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;send_after&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="ss"&gt;:reconnect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:noreply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@impl&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;terminate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;connection:&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"AMQP Broker termintated: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;GenServer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:get_channel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;create_queues&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;GenServer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:create_queues&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; 
      &lt;span class="no"&gt;Process&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;monitor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;create_channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="ss"&gt;:ok&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="ss"&gt;:ok&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Gateway&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
      &lt;span class="c1"&gt;# Keep channel open for the GenServer lifecycle&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;connection:&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;channel:&lt;/span&gt; &lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="ss"&gt;username:&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;password:&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;virtual_host:&lt;/span&gt; &lt;span class="n"&gt;vhost&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;host:&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;port:&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="ss"&gt;ssl_options:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="ss"&gt;verify:&lt;/span&gt; &lt;span class="ss"&gt;:verify_peer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;customize_hostname_check:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="ss"&gt;match_fun:&lt;/span&gt; &lt;span class="ss"&gt;:public_key&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkix_verify_hostname_match_fun&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:https&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="c1"&gt;# from CAStore package&lt;/span&gt;
        &lt;span class="ss"&gt;cacertfile:&lt;/span&gt; &lt;span class="no"&gt;CAStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;create_channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt; &lt;span class="p"&gt;\\&lt;/span&gt; &lt;span class="nv"&gt;@exchange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Exchange&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;queues&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt; &lt;span class="p"&gt;\\&lt;/span&gt; &lt;span class="nv"&gt;@exchange&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Enum&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;each&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
      &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;durable:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chan&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;routing_key:&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AMQP client implementing message publishing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Client&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;


  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Retry&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Annotation&lt;/span&gt;
  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Appsignal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Instrumentation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Decorators&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Broker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;

  &lt;span class="nv"&gt;@timeout&lt;/span&gt; &lt;span class="mi"&gt;60_000&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nv"&gt;@retry&lt;/span&gt; &lt;span class="ss"&gt;with:&lt;/span&gt; &lt;span class="n"&gt;exponential_backoff&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;randomize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;expiry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="ss"&gt;:poolboy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="no"&gt;AMQP&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Basic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nv"&gt;@timeout&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;poolboy config to start a pool of connections&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="no"&gt;Config&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;pool_size:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;max_overflow:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Build the poolboy child spec from that config&lt;br&gt;
&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt; &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;amqp_poolboy_config&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:local&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:worker_module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Broker&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="ss"&gt;:pool_size&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:max_overflow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="ss"&gt;:max_overflow&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add poolboy to the Supervisor&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="n"&gt;children&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;span class="ss"&gt;:poolboy&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;child_spec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:amqp_worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;amqp_poolboy_config&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

 &lt;span class="n"&gt;opts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;strategy:&lt;/span&gt; &lt;span class="ss"&gt;:one_for_one&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Supervisor&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="no"&gt;Supervisor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;children&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Broadway&lt;/code&gt; to consume messages&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Broadway&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Whatsapp&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Broadway.Amqp.Whatsapp
  """&lt;/span&gt;
  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Broadway&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Appsignal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Instrumentation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Decorators&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Broker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;
  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Whatsapp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;WhatsappMessage&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;producer_concurrency&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:broadway_producer_concurrency&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;processors_concurrency&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:broadway_processor_concurrency&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fetch_env!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:broadway_client_prefix&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Broadway&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="ss"&gt;producer:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="ss"&gt;module:&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="no"&gt;BroadwayRabbitMQ&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Producer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="ss"&gt;queue:&lt;/span&gt; &lt;span class="no"&gt;Application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_env&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:maverick&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:whatsapp_topic&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
           &lt;span class="ss"&gt;on_success:&lt;/span&gt; &lt;span class="ss"&gt;:ack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="ss"&gt;on_failure:&lt;/span&gt; &lt;span class="ss"&gt;:reject_and_requeue_once&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="ss"&gt;declare:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;durable:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
           &lt;span class="ss"&gt;connection:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
             &lt;span class="ss"&gt;username:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
             &lt;span class="ss"&gt;password:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
             &lt;span class="ss"&gt;virtual_host:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vhost&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
             &lt;span class="ss"&gt;host:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
             &lt;span class="ss"&gt;port:&lt;/span&gt; &lt;span class="no"&gt;AmqpBroker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
             &lt;span class="ss"&gt;ssl_options:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
               &lt;span class="ss"&gt;verify:&lt;/span&gt; &lt;span class="ss"&gt;:verify_peer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
               &lt;span class="ss"&gt;customize_hostname_check:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                 &lt;span class="ss"&gt;match_fun:&lt;/span&gt; &lt;span class="ss"&gt;:public_key&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkix_verify_hostname_match_fun&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:https&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
               &lt;span class="p"&gt;],&lt;/span&gt;
               &lt;span class="c1"&gt;# from CAStore package&lt;/span&gt;
               &lt;span class="ss"&gt;cacertfile:&lt;/span&gt; &lt;span class="no"&gt;CAStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
             &lt;span class="p"&gt;]&lt;/span&gt;
           &lt;span class="p"&gt;],&lt;/span&gt;
           &lt;span class="ss"&gt;qos:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
             &lt;span class="ss"&gt;prefetch_count:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;
           &lt;span class="p"&gt;]},&lt;/span&gt;
        &lt;span class="ss"&gt;concurrency:&lt;/span&gt; &lt;span class="n"&gt;producer_concurrency&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="ss"&gt;processors:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="ss"&gt;default:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="ss"&gt;concurrency:&lt;/span&gt; &lt;span class="n"&gt;processors_concurrency&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:broadway_amqp_whatsapp_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;IO&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;inspect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;label:&lt;/span&gt; &lt;span class="s2"&gt;"Got Whatsapp message"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AMQP’s adoption continues to grow as more enterprises recognize the need for reliable, transactional messaging solutions. It shines in environments where reliability, flexible message routing, and ease of use are critical, and it makes for a resilient platform, especially when paired with a language like Elixir, whose mature AMQP client libraries provide stability and robustness.&lt;/p&gt;

&lt;p&gt;For those looking to build robust, scalable messaging systems, AMQP’s advantages over Kafka in transactional and low-latency messaging make it a strong contender in the world of message brokers. As industries increasingly move toward distributed architectures, AMQP’s proven reliability and performance in business-critical environments ensure its continued success.&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>eventdriven</category>
      <category>amqp</category>
      <category>pooling</category>
    </item>
    <item>
      <title>Simplifying Webhook Handling with Vector.dev: A Modern Solution for Serverless Apps</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Thu, 05 Sep 2024 15:01:50 +0000</pubDate>
      <link>https://forem.com/darnahsan/simplifying-webhook-handling-with-vectordev-a-modern-solution-for-serverless-apps-19b4</link>
      <guid>https://forem.com/darnahsan/simplifying-webhook-handling-with-vectordev-a-modern-solution-for-serverless-apps-19b4</guid>
      <description>&lt;p&gt;Simplifying Webhook Handling with Vector.dev: A Modern Solution for Serverless Apps&lt;br&gt;
In the world of serverless applications and third-party services, handling incoming data efficiently is critical. Webhooks are one common mechanism for delivering this data, where external systems push data to your service in real time. However, building and maintaining your own webhook service can quickly turn into a case of over-engineering, especially when you need to integrate multiple downstream or upstream systems. This is where Vector.dev comes into play.&lt;/p&gt;

&lt;p&gt;Vector is a high-performance observability and data pipeline tool built in Rust, designed to handle large-scale data processing efficiently. It comes packed with a wide variety of sinks, allowing you to seamlessly push data to many destinations, all through configuration rather than writing custom code. With Vector, setting up webhook handling becomes a simple, yet powerful, solution for routing, transforming, and managing data from different sources.&lt;/p&gt;

&lt;p&gt;Why Vector for Webhook Handling?&lt;br&gt;
There are several reasons why Vector is a great fit for managing webhook events:&lt;/p&gt;

&lt;p&gt;No Over-Engineering: You can avoid building a custom webhook service from scratch. With Vector’s pre-built sinks and powerful configurations, you can route incoming webhook data to a variety of destinations—queues, databases, object storage, and more—without extra coding.&lt;/p&gt;

&lt;p&gt;Wide Range of Sinks: Vector supports a large ecosystem of sinks. Whether you're publishing events to Apache Kafka for real-time stream processing or backing up the data to Amazon S3 for long-term storage, Vector has you covered. And the best part? It’s all done through configuration.&lt;/p&gt;

&lt;p&gt;Data Transformation with VRL: The Vector Remap Language (VRL) is a built-in language that allows you to remap, filter, and transform incoming webhook events. You can easily modify the structure of the incoming payload, apply filtering rules, and send the transformed data to the desired destination.&lt;/p&gt;
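
&lt;p&gt;As a sketch of what VRL makes possible (the transform name and fields below are illustrative, not taken from the pipeline in this article), a &lt;code&gt;remap&lt;/code&gt; transform could normalize each incoming payload:&lt;/p&gt;

```toml
# Hypothetical remap transform; "webhook" refers to an http_server source,
# and the field names are made up for illustration.
[transforms.normalize]
  type = "remap"
  inputs = [ "webhook" ]
  source = '''
  # Stamp every event with an ingestion time
  .received_at = now()
  # Drop a noisy field when present
  del(.debug)
  # Guarantee an event type so routing conditions always match something
  .event = string(.event) ?? "unknown"
  '''
```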

&lt;p&gt;Multi-Destination Routing: Vector supports publishing the same event to multiple destinations simultaneously. For instance, you could send the incoming webhook event to Kafka for real-time processing and store a backup copy of the same event in S3—ensuring you have both live and persistent copies of the data.&lt;/p&gt;

&lt;p&gt;Buffering and Retry Mechanisms: One of Vector’s most useful features is its built-in buffering and retry capabilities. This ensures that even if a sink becomes temporarily unavailable, Vector will hold onto the data and retry sending it, maintaining the reliability of your pipeline.&lt;/p&gt;
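
&lt;p&gt;For example, a disk-backed buffer can be enabled per sink purely through configuration; the sink name here is hypothetical, and Vector enforces a minimum disk buffer size of roughly 256 MiB:&lt;/p&gt;

```toml
# Hypothetical per-sink disk buffer: buffered events survive restarts,
# and back-pressure is applied ("block") when the buffer fills up.
[sinks.kafka_events.buffer]
  type = "disk"
  max_size = 268435488  # bytes; Vector's minimum for disk buffers
  when_full = "block"
```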

&lt;p&gt;Scalability and Horizontal Scaling: Vector’s architecture is built for performance. It uses Rust for high efficiency and can scale horizontally to meet your application's growing demands. You can also connect multiple Vector nodes to pass data between them, allowing you to build resilient, fail-safe pipelines that handle data efficiently at any scale.&lt;/p&gt;
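
&lt;p&gt;A minimal sketch of that node-to-node topology (the addresses and component names are placeholders) uses the &lt;code&gt;vector&lt;/code&gt; sink on an edge node and the &lt;code&gt;vector&lt;/code&gt; source on an aggregator:&lt;/p&gt;

```toml
# On the edge node: forward events to an aggregator (address is a placeholder).
[sinks.to_aggregator]
  type = "vector"
  inputs = [ "webhook" ]
  address = "aggregator.internal:6000"

# On the aggregator node: accept events pushed from edge nodes.
[sources.from_edge]
  type = "vector"
  address = "0.0.0.0:6000"
```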

&lt;p&gt;To set up a webhook, an &lt;code&gt;http_server&lt;/code&gt; source needs to be configured; authentication can be added to secure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[sources.webhook]&lt;/span&gt;
  &lt;span class="py"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"http_server"&lt;/span&gt;
  &lt;span class="py"&gt;address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.0.0.0:8080"&lt;/span&gt;
  &lt;span class="py"&gt;method&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"PUT"&lt;/span&gt;
  &lt;span class="py"&gt;path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/webh00k"&lt;/span&gt;
  &lt;span class="py"&gt;path_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"webhook"&lt;/span&gt;
&lt;span class="nn"&gt;[sources.webhook.auth]&lt;/span&gt;
  &lt;span class="py"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${BASIC_AUTH_USERNAME}"&lt;/span&gt;
  &lt;span class="py"&gt;password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"${BASIC_AUTH_PASSWORD}"&lt;/span&gt;
&lt;span class="nn"&gt;[sources.webhook.decoding]&lt;/span&gt;
  &lt;span class="py"&gt;codec&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"json"&lt;/span&gt;
&lt;span class="nn"&gt;[sources.webhook.framing]&lt;/span&gt;
  &lt;span class="py"&gt;method&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"newline_delimited"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once events are received by the source, they can be routed to different sinks as separate streams using the route transform, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[transforms.condition]&lt;/span&gt;
  &lt;span class="py"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"route"&lt;/span&gt;
  &lt;span class="py"&gt;inputs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="s"&gt;"webhook"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nn"&gt;[transforms.condition.route]&lt;/span&gt;
 &lt;span class="py"&gt;log_webhook&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'exists(.event) &amp;amp;&amp;amp; .event == "log"'&lt;/span&gt;
 &lt;span class="py"&gt;event_webhook&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'exists(.event) &amp;amp;&amp;amp; .event == "event"'&lt;/span&gt;
 &lt;span class="py"&gt;metric_webhook&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;'exists(.event) &amp;amp;&amp;amp; .event == "metric"'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important point to remember: if you are producing your own data, make sure you tag all your events well.&lt;/p&gt;

&lt;p&gt;After that, events can be sent on to any sinks of your choice. ;)&lt;/p&gt;
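
&lt;p&gt;For instance (the broker address, bucket, and topic names below are illustrative), the routed streams could be fanned out to Kafka for real-time processing and to S3 as a durable backup:&lt;/p&gt;

```toml
# Hypothetical sinks consuming the "event_webhook" route defined above.
[sinks.kafka_events]
  type = "kafka"
  inputs = [ "condition.event_webhook" ]
  bootstrap_servers = "kafka:9092"
  topic = "webhook-events"
  encoding.codec = "json"

# The same stream backed up to S3, compressed for long-term storage.
[sinks.s3_backup]
  type = "aws_s3"
  inputs = [ "condition.event_webhook" ]
  bucket = "webhook-backup"
  region = "us-east-1"
  encoding.codec = "json"
  compression = "gzip"
```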

&lt;p&gt;Using Vector to handle webhooks simplifies what would otherwise be a complex and time-consuming task. By leveraging Vector’s pre-built sinks, VRL for transformation, and powerful features like buffering, retries, and horizontal scaling, you can build robust, fail-safe pipelines that scale with your application.&lt;/p&gt;

&lt;p&gt;With minimal setup, Vector allows you to focus on what matters: delivering value, instead of getting bogged down by infrastructure concerns. Whether you're managing incoming data from third-party services or building a scalable serverless app, Vector.dev is a modern, efficient solution that reduces overhead and improves reliability.&lt;/p&gt;

</description>
      <category>webhook</category>
      <category>vector</category>
      <category>serverless</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Multi-Layered Caching with Decorators in Elixir: Optimizing Performance and Scalability</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Tue, 18 Jun 2024 07:30:18 +0000</pubDate>
      <link>https://forem.com/darnahsan/multi-layered-caching-with-decorators-in-elixir-optimizing-performance-and-scalability-3gd7</link>
      <guid>https://forem.com/darnahsan/multi-layered-caching-with-decorators-in-elixir-optimizing-performance-and-scalability-3gd7</guid>
      <description>&lt;p&gt;Phil Karlton's famous quote aptly captures the essence of caching in computer science: "There are only two hard things in Computer Science: cache invalidation and naming things." Caching is a powerful technique to improve application performance by storing frequently accessed data in a faster, more readily accessible location. However, effective cache invalidation remains a challenge.&lt;/p&gt;

&lt;p&gt;Elixir, with its powerful in-memory caching options such as ETS and DETS, provides a robust solution that often eliminates the need for external caching systems like Memcached or Redis. For a deep dive, &lt;a href="https://blog.appsignal.com/2019/11/12/caching-with-elixir-and-ets.html"&gt;Dashbit’s blog post&lt;/a&gt; on why you may not need Redis with Elixir is an excellent resource. But for globally distributed, dynamically scaling serverless environments, these in-memory caches become less suitable.&lt;/p&gt;

&lt;p&gt;However, scaling caching across multiple global regions in a serverless environment introduces new challenges. Dynamic scaling of nodes means that your in-memory cache can disappear along with the node, complicating matters.&lt;/p&gt;

&lt;p&gt;Cachex, the most popular Elixir caching library, offers a clustered cache option but lacks support for dynamic node configuration. This limitation becomes evident in environments like Fly.io, where nodes scale dynamically and their addresses aren’t known at startup. Engaging with the Fly.io community led me to an &lt;a href="https://github.com/whitfin/cachex/issues/246"&gt;open issue&lt;/a&gt; on Cachex’s GitHub regarding dynamic node configuration. This search introduced me to &lt;a href="https://github.com/cabol/nebulex"&gt;Nebulex&lt;/a&gt;, a feature-rich library supporting multiple cache stores, including Cachex and Redis.&lt;/p&gt;

&lt;p&gt;Nebulex also supports various caching patterns out of the box. While setting up the Redis adapter for Nebulex, I encountered a blocker &lt;a href="https://github.com/cabol/nebulex_redis_adapter/issues/44"&gt;issue&lt;/a&gt; with Upstash Redis, revealing another limitation. Despite this, the exploration provided insights on constructing a layered caching solution using decorators, an approach Nebulex itself uses.&lt;/p&gt;

&lt;p&gt;With Cachex limited in dynamic environments and the Redis adapter for Nebulex not meeting expectations, we can borrow the strengths of both and create a custom layered caching solution using decorators in Elixir. This approach combines the speed of a local Cachex cache as L1 (short TTL) with the scalability of an external Redis as L2 (longer TTL), reducing code clutter and improving maintainability. Using decorators avoids scattering the code with numerous get, set, and delete operations.&lt;/p&gt;

&lt;p&gt;Advantages of Multi-Layered Caching&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster Responses: Cachex (L1) provides microsecond latency for frequently accessed data.&lt;/li&gt;
&lt;li&gt;Reduced External Hits: Redis (L2) serves as a fallback, significantly reducing the number of requests to external services.&lt;/li&gt;
&lt;li&gt;Cost Efficiency: Fewer requests to Redis minimize cloud costs.&lt;/li&gt;
&lt;li&gt;Graceful Degradation: The system can serve stale data from L1 within a threshold, ensuring continuity while fetching fresh data from L2.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an outline of the approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the Decorator: Create a decorator function that accepts the actual function to be wrapped and the cache configuration options (e.g., cache type, TTLs for L1 and L2).&lt;/li&gt;
&lt;li&gt;Check L1 Cache: Inside the decorator, first check the L1 cache (Cachex) for the requested data using the key derived from the function arguments.&lt;/li&gt;
&lt;li&gt;Retrieve from L2 Cache: If the data is not found in L1, retrieve it from the L2 cache (Redis) using the same key.&lt;/li&gt;
&lt;li&gt;Fetch from Source: If both L1 and L2 caches miss, call the wrapped function to fetch the data from the original source.&lt;/li&gt;
&lt;li&gt;Cache the Result: Store the fetched data in both L1 and L2 caches with their respective TTLs.&lt;/li&gt;
&lt;li&gt;Return the Data: Finally, return the retrieved or fetched data.&lt;/li&gt;
&lt;li&gt;By wrapping functions with this decorator, you can transparently introduce caching without altering the core logic of your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using Elixir's &lt;a href="https://github.com/arjan/decorator"&gt;decorator&lt;/a&gt; library, it is as simple as writing the following.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;Decorator&lt;/code&gt; module lets you decorate your functions as&lt;br&gt;
 &lt;code&gt;@decorate lx_fetch(["key", args0])&lt;/code&gt;&lt;br&gt;
&lt;code&gt;@decorate lx_evict(["key", args0])&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Decorator&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Cache.Decorator
  """&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Decorator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Define&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;lx_fetch:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;lx_evict:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;L1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;CacheL1&lt;/span&gt;
  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;L2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;CacheL2&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;lx_fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kn"&gt;quote&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="no"&gt;CacheL1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
          &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="no"&gt;CacheL2&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
              &lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
              &lt;span class="no"&gt;CacheL1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
              &lt;span class="n"&gt;data&lt;/span&gt;

            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
              &lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="k"&gt;end&lt;/span&gt;

        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
          &lt;span class="n"&gt;data&lt;/span&gt;

        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;
          &lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;lx_evict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kn"&gt;quote&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;CacheL1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="no"&gt;CacheL2&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="kn"&gt;unquote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
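&lt;p&gt;With the decorator defined, caching becomes a one-line annotation on any function. The module, functions, and key shape below are hypothetical, purely to sketch the usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;defmodule Maverick.Accounts do
  use Maverick.Cache.Decorator

  # Reads check L1, then L2 (backfilling L1 on an L2 hit); only on a
  # full miss does the function body run. Note the fetch decorator does
  # not write the computed result back, so the body is responsible for
  # populating the cache if write-through behaviour is wanted.
  @decorate lx_fetch({:user, user_id})
  def get_user(user_id) do
    Repo.get(User, user_id)
  end

  # Writes evict the key from both tiers before the body runs.
  @decorate lx_evict({:user, user_id})
  def update_user(user_id, attrs) do
    Repo.get(User, user_id) |&amp;gt; User.changeset(attrs) |&amp;gt; Repo.update()
  end
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;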



&lt;p&gt;&lt;code&gt;L1 cache&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;L1&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Cache.L1
  """&lt;/span&gt;
  &lt;span class="kn"&gt;require&lt;/span&gt; &lt;span class="no"&gt;Logger&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Appsignal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Instrumentation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Decorators&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Helper&lt;/span&gt;

  &lt;span class="nv"&gt;@cachex&lt;/span&gt; &lt;span class="ss"&gt;:maverick_cachex&lt;/span&gt;
  &lt;span class="nv"&gt;@ttl&lt;/span&gt; &lt;span class="mi"&gt;1800&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Get from L1 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@cachex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;\\&lt;/span&gt; &lt;span class="nv"&gt;@ttl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Set in L1 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@cachex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;ttl:&lt;/span&gt; &lt;span class="ss"&gt;:timer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Delete from L1 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Cachex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@cachex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;L2 cache&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;L2&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Cache.L2
  """&lt;/span&gt;
  &lt;span class="kn"&gt;require&lt;/span&gt; &lt;span class="no"&gt;Logger&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Appsignal&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Instrumentation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Decorators&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Utility&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Helper&lt;/span&gt;

  &lt;span class="nv"&gt;@redis&lt;/span&gt; &lt;span class="ss"&gt;:maverick_redix&lt;/span&gt;
  &lt;span class="nv"&gt;@ttl&lt;/span&gt; &lt;span class="mi"&gt;7_776_000&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Get from L2 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="no"&gt;Redix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:erlang&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;binary_to_term&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ttl&lt;/span&gt; &lt;span class="p"&gt;\\&lt;/span&gt; &lt;span class="nv"&gt;@ttl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Set in L2 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Redix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"SET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:erlang&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;term_to_binary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="s2"&gt;"EX"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="nv"&gt;@decorate&lt;/span&gt; &lt;span class="n"&gt;transaction_event&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;del&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Helper&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Delete from L2 Cache for &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;Redix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;@redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DEL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;flush_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Redix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:redix&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FLUSHALL"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Cache key&lt;/code&gt; generation is done as in Nebulex, using Erlang's &lt;code&gt;:erlang.phash2/1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;generate_cache_key&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="ss"&gt;:erlang&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;phash2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
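&lt;p&gt;Because &lt;code&gt;:erlang.phash2/1&lt;/code&gt; hashes any Erlang term deterministically, and portably across machines and OTP releases, composite keys such as tuples produce the same cache key on every node, which is what lets the L1 and L2 tiers agree. A quick &lt;code&gt;iex&lt;/code&gt; sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;alias Maverick.Utility.Helper

# The same term always yields the same key, on any node:
iex&amp;gt; Helper.generate_cache_key({:user, 42}) == Helper.generate_cache_key({:user, 42})
true

# One caveat: phash2/1 maps terms into 0..2^27 - 1, so key
# collisions are possible in principle, just unlikely at
# typical key counts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;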



&lt;p&gt;And my favourite version of Phil Karlton's quote is:&lt;/p&gt;

&lt;p&gt;"There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors."&lt;br&gt;
-- Leon Bambrick&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>cache</category>
      <category>redis</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Brod; boss kafka in elixir</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Mon, 03 Jun 2024 15:32:32 +0000</pubDate>
      <link>https://forem.com/darnahsan/brod-boss-kafka-in-elixir-eo</link>
      <guid>https://forem.com/darnahsan/brod-boss-kafka-in-elixir-eo</guid>
      <description>&lt;p&gt;Elixir has rapidly gained recognition as a powerful language for developing distributed systems. Its concurrency model, based on the Erlang VM (BEAM), allows developers to build highly scalable and fault-tolerant applications. One of the areas where Elixir shines is in reading messages from Kafka using Broadway to build concurrent and multi-stage data ingestion and processing pipelines. However, when it comes to producing messages to Kafka, Elixir's ecosystem seems to lack a unified focus, leading to some confusion.&lt;/p&gt;

&lt;p&gt;Elixir's inherent support for concurrency and fault tolerance makes it an ideal choice for distributed systems. The language's lightweight process model, along with features like supervisors and the actor model, enables developers to create systems that can handle massive loads and recover gracefully from failures. This makes Elixir particularly well-suited for distributed systems, where reliability and performance are crucial.&lt;/p&gt;

&lt;p&gt;In the Elixir ecosystem, there is a clear focus on consuming messages from Kafka. Libraries like Broadway make it easy to build sophisticated data ingestion pipelines. Broadway allows developers to define multi-stage pipelines that can process large volumes of data concurrently, leveraging Elixir's strengths in concurrency and fault tolerance.&lt;/p&gt;

&lt;p&gt;While Broadway excels at consuming messages, there's a slight terminology hiccup: the stages that feed messages into a Broadway pipeline are called producers, which might be a bit confusing. It's important to remember that Broadway focuses on the consumer side of the Kafka equation.&lt;/p&gt;

&lt;p&gt;However, when it comes to producing messages to Kafka, Elixir's libraries present a more fragmented landscape.&lt;/p&gt;

&lt;p&gt;There are three primary Kafka libraries available for Elixir developers: brod, kaffe, and kafka_ex. Each of these libraries has its own strengths and use cases.&lt;/p&gt;

&lt;p&gt;brod: An Erlang client for Kafka, brod is known for its robustness and performance. It operates seamlessly within the BEAM ecosystem, taking advantage of Erlang's mature infrastructure for building distributed systems. However, working with brod can be cumbersome and requires a fair amount of setup. Despite this, it remains a reliable and performant choice for Kafka integration.&lt;/p&gt;

&lt;p&gt;kaffe: A wrapper around brod, kaffe simplifies the process of interacting with Kafka, particularly for those using Heroku Kafka clusters. By abstracting away some of the complexities of brod, kaffe makes it easier for developers to get started with Kafka in Elixir. It focuses on providing a more user-friendly experience while still leveraging the underlying power of brod.&lt;/p&gt;

&lt;p&gt;kafka_ex: Currently in a state of transition, kafka_ex is undergoing significant changes that are not backward compatible. This situation can be likened to Python's shift from version 2 to version 3, where developers faced considerable breaking changes. While kafka_ex has been a popular choice, the ongoing transition means developers need to be cautious about using it in production until the changes stabilize.&lt;/p&gt;

&lt;p&gt;Keeping all of the above in mind, and after scouring elixirforum.com, the most widely shared opinion was to go with brod and write a wrapper around it.&lt;/p&gt;

&lt;p&gt;What's lacking, however, is guidance on how to set it up in Elixir, and in Phoenix in particular, since brod is an Erlang library.&lt;/p&gt;

&lt;p&gt;To set up brod in Phoenix, you define a supervisor that can then be added to your application's supervision tree alongside your other processes.&lt;/p&gt;

&lt;p&gt;Setting up the brod client with SASL and SSL as a supervisor can be done as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;BrodSupervisor&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Kafka.BrodSupervisor
  """&lt;/span&gt;
  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Supervisor&lt;/span&gt;

  &lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;as:&lt;/span&gt; &lt;span class="no"&gt;Kafka&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Supervisor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_link&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="bp"&gt;__MODULE__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="ss"&gt;:ok&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
      &lt;span class="ss"&gt;:brod&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;brod_client&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="ss"&gt;ssl:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;ssl_options:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="c1"&gt;# from CAStore package&lt;/span&gt;
          &lt;span class="ss"&gt;cacertfile:&lt;/span&gt; &lt;span class="no"&gt;CAStore&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
          &lt;span class="ss"&gt;verify_type:&lt;/span&gt; &lt;span class="ss"&gt;:verify_peer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;customize_hostname_check:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="ss"&gt;match_fun:&lt;/span&gt; &lt;span class="ss"&gt;:public_key&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkix_verify_hostname_match_fun&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:https&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="ss"&gt;sasl:&lt;/span&gt; &lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;authentication&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="ss"&gt;auto_start_producers:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;reconnect_cool_down_seconds:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="ss"&gt;default_producer_config:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="ss"&gt;required_acks:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="ss"&gt;partition_buffer_limit:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;children&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="no"&gt;Supervisor&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;children&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;strategy:&lt;/span&gt; &lt;span class="ss"&gt;:one_for_one&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
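&lt;p&gt;With the supervisor module defined, wiring it in is just another entry in the application's child list. A sketch, assuming a standard generated &lt;code&gt;application.ex&lt;/code&gt; (the endpoint and top-level supervisor names around it are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;# in lib/maverick/application.ex
def start(_type, _args) do
  children = [
    MaverickWeb.Endpoint,
    # the brod client starts (and reconnects) under the app supervision tree
    {Maverick.Kafka.BrodSupervisor, []}
  ]

  Supervisor.start_link(children, strategy: :one_for_one, name: Maverick.Supervisor)
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;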



&lt;p&gt;After this, you can wrap the message producer for convenience:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;Maverick&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Kafka&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Brod&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  Maverick.Kafka.Brod
  """&lt;/span&gt;

  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Retry&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Annotation&lt;/span&gt;

  &lt;span class="nv"&gt;@retry&lt;/span&gt; &lt;span class="ss"&gt;with:&lt;/span&gt; &lt;span class="n"&gt;exponential_backoff&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;randomize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;expiry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;produce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="ss"&gt;:brod&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;produce_sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I highly recommend checking out the &lt;code&gt;retry&lt;/code&gt; hex package; it will save you from transient network failures, because the network is always dubious.&lt;/p&gt;

&lt;p&gt;Before I close this post, the thing I would like to highlight is the &lt;code&gt;required_acks&lt;/code&gt; setting and what it means. The &lt;code&gt;required_acks&lt;/code&gt; (also known as &lt;code&gt;acks&lt;/code&gt;) setting determines how many acknowledgements the producer requires the leader to have received before considering a request complete. This setting has a significant impact on the durability and consistency guarantees of your messages. &lt;code&gt;required_acks&lt;/code&gt; can be set to the following values:&lt;/p&gt;

&lt;p&gt;0: The producer does not wait for any acknowledgement from the server at all. This means that the producer will not receive any acknowledgement for the messages sent, and message loss can occur if the server fails before the message is written to disk.&lt;br&gt;
1: The leader writes the record to its local log but responds without waiting for full acknowledgement from all followers. This means that the message is acknowledged as soon as the leader writes it, but before all replicas have received it.&lt;br&gt;
-1 (or all): The leader waits for the full set of in-sync replicas to acknowledge the record. This is the strongest guarantee and means that the message is considered committed only when all in-sync replicas have acknowledged it.&lt;/p&gt;

&lt;p&gt;In the context of the brod library, setting required_acks: -1 ensures that:&lt;/p&gt;

&lt;p&gt;The producer waits for acknowledgements from all in-sync replicas before considering the message successfully sent.&lt;br&gt;
This provides the highest level of durability since the message will be available even if the leader broker fails after the message is acknowledged.&lt;/p&gt;

&lt;p&gt;I hope this makes it simple for people looking to work with Kafka using Elixir.&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>kafka</category>
      <category>cap</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>Deploy Ollama with s6-overlay to serve and pull in one shot</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Wed, 29 May 2024 17:27:44 +0000</pubDate>
      <link>https://forem.com/darnahsan/deploy-ollama-with-s6-overlay-to-serve-and-pull-in-one-shot-31cm</link>
      <guid>https://forem.com/darnahsan/deploy-ollama-with-s6-overlay-to-serve-and-pull-in-one-shot-31cm</guid>
      <description>&lt;p&gt;Ollama brings the power of Large Language Models (LLMs) directly to your local machine. It removes the complexity of cloud-based solutions by offering a user-friendly framework for running these powerful models.&lt;/p&gt;

&lt;p&gt;Ollama is a robust platform designed to simplify the process of running machine learning models locally. It offers an intuitive interface that allows users to efficiently manage and deploy models without the need for extensive technical knowledge. By streamlining the setup and execution processes, Ollama makes it accessible for developers to harness the power of advanced models directly on their local machines, promoting ease of use and faster iterations in development cycles.&lt;/p&gt;

&lt;p&gt;However, Ollama does come with a notable limitation when it comes to containerized deployments. To download and manage models, Ollama must be actively running and serving before the models can be accessed. This requirement complicates the deployment process within containers, as it necessitates additional steps to ensure the service is up and operational before any model interactions can occur. Consequently, this adds complexity to Continuous Integration (CI) and Continuous Deployment (CD) pipelines, potentially hindering seamless automation and scaling efforts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/r/ollama/ollama" rel="noopener noreferrer"&gt;Ollama's Docker Hub page&lt;/a&gt; has clear instructions on how to run Ollama in two steps. In the first step you need to have Ollama running before you can download the model to have it ready for prompting.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;docker exec -it ollama ollama run llama3&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;On their Discord there is a help query about how to do this in one shot, with a solution that is good but not something I would put in production due to the lack of orchestration and supervision of processes. It's on GitHub as &lt;a href="https://github.com/spara/autollama" rel="noopener noreferrer"&gt;autollama&lt;/a&gt; and I recommend checking it out to learn some new tricks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd84p7ytk9b51ro4cqgqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd84p7ytk9b51ro4cqgqm.png" alt="discord issue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where I leveraged my past experience with &lt;a href="https://github.com/just-containers/s6-overlay" rel="noopener noreferrer"&gt;s6-overlay&lt;/a&gt; to set up &lt;code&gt;serve&lt;/code&gt; and &lt;code&gt;pull&lt;/code&gt; in a single container, with serve as a &lt;code&gt;longrun&lt;/code&gt; and pull as a &lt;code&gt;oneshot&lt;/code&gt; that depends on serve being up and running.&lt;/p&gt;

&lt;p&gt;The directory structure for it is as below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvstii0zqoxd19imbeu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvstii0zqoxd19imbeu2.png" alt="ollama-s6-dir"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It runs flawlessly, with &lt;code&gt;pull&lt;/code&gt; supervised and orchestrated to completion; even when the download gets hammered by slow internet speeds, it keeps the process going without a glitch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrbda6vtbya8ob7emu03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrbda6vtbya8ob7emu03.png" alt="ollama s6 downloading"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently there is a known &lt;a href="https://github.com/just-containers/s6-overlay/issues/577y" rel="noopener noreferrer"&gt;issue in s6-overlay&lt;/a&gt; with the service wait time, which initially caused the &lt;code&gt;oneshot&lt;/code&gt; to time out. I had to set &lt;code&gt;S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0&lt;/code&gt; to disable it so that the model download would not fail.&lt;/p&gt;
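A minimal sketch of how that could look at run time; the image name `ollama-s6` is illustrative, and setting the variable to 0 disables the readiness timeout entirely:

```shell
# S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0 stops s6-overlay from killing the
# oneshot while a large model download is still in progress.
docker run -d -e S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0 \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama-s6 ollama-s6:latest
```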

&lt;p&gt;It is alive! At this point I was just super happy with how smoothly it came up. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gm1cfybocw8lqpchfjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gm1cfybocw8lqpchfjt.png" alt="ollama running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On subsequent runs, &lt;code&gt;pull&lt;/code&gt; only fetches the diff, if any, without needing to download the whole model again. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4axazesataaqw3xx3beg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4axazesataaqw3xx3beg.png" alt="Ollama Pull diff"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And Ollama has an API that you can prompt; it's a charm to play around with.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx63yq8jfo1eeurf6m8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx63yq8jfo1eeurf6m8f.png" alt="prompt ollama"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;serve&lt;/code&gt; and &lt;code&gt;pull&lt;/code&gt; in a single container deployed alongside your application, you simplify not only your deployments but also your CI testing, without overcomplicating things with hacked-together scripts.&lt;/p&gt;

&lt;p&gt;I have put the repo on GitHub as &lt;a href="https://github.com/ahsandar/ollama-s6/tree/main" rel="noopener noreferrer"&gt;ollama-s6&lt;/a&gt; for anyone looking to productionize their Ollama deployments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ollama</category>
      <category>s6overlay</category>
      <category>docker</category>
    </item>
    <item>
      <title>Supercharge Your Ecto Queries over Postgres JSONB with Flop: Filtering, Sorting, and Pagination</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Thu, 28 Mar 2024 10:13:33 +0000</pubDate>
      <link>https://forem.com/darnahsan/supercharge-your-ecto-queries-over-postgres-jsonb-with-flop-filtering-sorting-and-pagination-28kl</link>
      <guid>https://forem.com/darnahsan/supercharge-your-ecto-queries-over-postgres-jsonb-with-flop-filtering-sorting-and-pagination-28kl</guid>
      <description>&lt;p&gt;Elixir's &lt;a href="https://hexdocs.pm/ecto/Ecto.html"&gt;Ecto&lt;/a&gt; library is fantastic for interacting with your database. But when it comes to building dynamic user interfaces with features like filtering, sorting, and pagination, things can get a bit cumbersome.&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://hexdocs.pm/flop/readme.html"&gt;Flop&lt;/a&gt;, a powerful Elixir library designed to simplify these tasks. Flop seamlessly integrates with Ecto, allowing you to add robust filtering, sorting, and pagination capabilities to your queries with minimal effort.&lt;/p&gt;

&lt;p&gt;Flop allows you to define custom filters for specific scenarios, providing ultimate flexibility in how you handle user input. You can also filter and sort data across related tables using join fields, ensuring a consistent user experience.&lt;/p&gt;

&lt;p&gt;Postgres is a masterful DB, and its support for JSONB columns offers a flexible way to store structured data within your database. But querying them can present some challenges compared to traditional columns, ranging from limited indexing and query complexity to optimizer difficulties, to name a few.&lt;/p&gt;

&lt;p&gt;Data can be extracted utilizing operators like &lt;code&gt;-&amp;gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;@&amp;gt;&lt;/code&gt;, and others designed for querying JSON data. These operators allow you to navigate the JSON structure and extract specific values for filtering or sorting.&lt;/p&gt;
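To make these operators concrete, here are two illustrative queries against a hypothetical table `profiles` with a JSONB column `metadata` (table and field names are my own, not from this post):

```sql
-- ->> extracts a JSON field as text; a cast lets you compare typed values
SELECT * FROM profiles WHERE (metadata->>'active')::boolean = true;

-- @> tests containment: does metadata contain this JSON document?
SELECT * FROM profiles WHERE metadata @> '{"owner": "alice"}';
```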

&lt;p&gt;Translating such operators into dynamic queries is not straightforward. Anyone who has ever added a filter understands the work required to implement the filters and operators while avoiding SQL injection attacks.&lt;/p&gt;

&lt;p&gt;Flop works seamlessly over traditional database columns out of the box, and also makes it easy to filter on join fields across tables using &lt;a href="https://hexdocs.pm/flop/schema.html"&gt;schema configurations&lt;/a&gt;. Something that is missing from the docs is how to use Flop to query JSONB column fields.&lt;/p&gt;

&lt;p&gt;So I reached out to the &lt;a href="https://elixirforum.com/t/flop-filtering-sorting-and-pagination-for-ecto/51750/30"&gt;hex package author on the Elixir forum&lt;/a&gt; and he suggested &lt;a href="https://elixirforum.com/t/flop-filtering-sorting-and-pagination-for-ecto/51750/31?"&gt;it could be done using join fields or custom fields&lt;/a&gt;. I opted for custom fields over join fields, as it didn't make sense to me to do a self-join to find a value in this case.&lt;/p&gt;

&lt;p&gt;My initial implementation didn't work as expected, so I posted on the &lt;a href="https://elixirforum.com/t/custom-filters-on-jsonb-column-using-flop/62519"&gt;Elixir forum&lt;/a&gt;, and the author provided valuable guidance on debugging the SQL being generated.&lt;/p&gt;

&lt;p&gt;A few things had to be done differently than expected for custom fields when querying JSONB. Below is how you can define custom fields for a JSONB column named &lt;code&gt;metadata&lt;/code&gt; with &lt;code&gt;active&lt;/code&gt; and &lt;code&gt;owner&lt;/code&gt; fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; adapter_opts: [
      custom_fields: [
        metadata_active: [
          filter: {MetadataFilter, :metadata, []},
          ecto_type: :boolean,
          operators: [:==]
        ],
        metadata_owner: [
          filter: {MetadataFilter, :metadata, []},
          ecto_type: :string,
          operators: [:==]
        ]
      ]
    ]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Filtering functionality can be defined as below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;defmodule Maverick.Utility.MetadataFilter do
  @moduledoc """
   Maverick.Utility.MetadataFilter
  """

  import Ecto.Query

  def metadata(query, %Flop.Filter{field: name, value: value, op: op} = _flop_filter, _) do
    metadata_value = value(name, value)

    expr = dynamic_expr(name)

    case metadata_value do
      {:ok, query_value} -&amp;gt;
        conditions =
          case op do
            :== -&amp;gt; dynamic([r], ^expr == ^query_value)
            :!= -&amp;gt; dynamic([r], ^expr != ^query_value)
            :&amp;gt; -&amp;gt; dynamic([r], ^expr &amp;gt; ^query_value)
            :&amp;lt; -&amp;gt; dynamic([r], ^expr &amp;lt; ^query_value)
            :&amp;gt;= -&amp;gt; dynamic([r], ^expr &amp;gt;= ^query_value)
            :&amp;lt;= -&amp;gt; dynamic([r], ^expr &amp;lt;= ^query_value)
          end

        where(query, ^conditions)

      :error -&amp;gt;
        IO.inspect("Error casting value #{value} for #{name}")
        query
    end
  end

  def field(:metadata_active), do: :active
  def field(:metadata_owner), do: :owner

  def value(:metadata_active, value), do: Ecto.Type.cast(:boolean, value)
  def value(:metadata_owner, value), do: Ecto.Type.cast(:string, value)

  def dynamic_expr(:metadata_active) do
    dynamic(
      [r],
      fragment(
        "(?-&amp;gt;&amp;gt;'active')::boolean",
        field(r, :metadata)
      )
    )
  end

  def dynamic_expr(:metadata_owner) do
    dynamic(
      [r],
      fragment(
        "(?-&amp;gt;&amp;gt;'owner')",
        field(r, :metadata)
      )
    )
  end
end


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The records can then be filtered via the API by simply adding the fields as params&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;url&amp;gt;?filters[0][field]=name&amp;amp;filters[0][value]=Expert&amp;amp;filters[1][field]=mobile_no&amp;amp;filters[1][value]=88827271111&amp;amp;filters[2][field]=metadata_active&amp;amp;filters[2][value]=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The part that had to be figured out was how the dynamic fragments were being generated: using the field name directly would generate SQL treating it as a column, rather than as a field within the &lt;code&gt;metadata&lt;/code&gt; column.&lt;/p&gt;

&lt;p&gt;Flop is a game-changer for building dynamic and user-friendly Elixir applications. Its seamless integration with Ecto and its rich feature set make it an essential tool for streamlining data manipulation and enhancing the user experience. So, if you're looking to simplify filtering, sorting, and pagination in your Elixir projects, Flop is definitely worth exploring.&lt;/p&gt;

&lt;p&gt;It was great working with the package author to implement and debug such functionality. The Elixir community is amazing and probably the best one out there.&lt;/p&gt;

&lt;p&gt;Also, I would like to thank ChatGPT and Gemini for outputting garbage and hallucinating solutions, wasting hours of valuable time that were only recovered with the help of the author and the Elixir community.&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>api</category>
      <category>postgres</category>
      <category>jsonb</category>
    </item>
    <item>
      <title>Using Plug in Elixir Phoenix to transfer custom request header or params value to HTTP Headers</title>
      <dc:creator>Ahsan Nabi Dar</dc:creator>
      <pubDate>Tue, 23 Jan 2024 01:07:37 +0000</pubDate>
      <link>https://forem.com/darnahsan/using-plug-in-elixir-phoenix-to-transfer-custom-request-header-or-params-value-to-http-headers-570a</link>
      <guid>https://forem.com/darnahsan/using-plug-in-elixir-phoenix-to-transfer-custom-request-header-or-params-value-to-http-headers-570a</guid>
      <description>&lt;p&gt;Elixir's web framework Phoenix makes use of Plugs which are composable module layers through which the HTTP request passes through allowing the incoming request to be manipulated as it is processed. As per &lt;a href="https://hexdocs.pm/phoenix/plug.html"&gt;Phoenix docs&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plug lives at the heart of Phoenix's HTTP layer, and Phoenix puts Plug front and center. We interact with plugs at every step of the request life-cycle, and the core Phoenix components like endpoints, routers, and controllers are all just plugs internally
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While recently building an API (yes, Phoenix is now my go-to choice for web dev :) ) I decided to use the &lt;a href="https://github.com/podium/simple_token_authentication"&gt;simple_token_authentication&lt;/a&gt; hex package instead of hand-rolling an API token checker. It has some nice features: it adds the service name to the log metadata for the matching token, and it uses Erlang's &lt;code&gt;persistent_term&lt;/code&gt; for some performance gains. Besides, you should never hand-roll auth(n/z) yourself, to avoid missing edge cases or introducing loopholes.&lt;/p&gt;

&lt;p&gt;The only shortcoming I faced with &lt;code&gt;simple_token_authentication&lt;/code&gt; is that it requires the API token to be sent in the &lt;code&gt;Authorization&lt;/code&gt; header, e.g. &lt;code&gt;Authorization: &amp;lt;API TOKEN&amp;gt;&lt;/code&gt;. This isn't a problem when you are integrating your API with other services that you own (API keys in mobile and web apps are useless for protection, as they are exposed publicly). It works as intended unless the &lt;code&gt;Authorization&lt;/code&gt; header is unavailable or restricted for use, in which case you need a custom header to pass the value. Another, more prominent case is integrating your API with a 3rd-party service or webhook where you can't pass a value in an HTTP header and have to append it to your URL.&lt;/p&gt;

&lt;p&gt;So, in order to pick up your API token from the HTTP header &lt;code&gt;x-api-key&lt;/code&gt; or the param &lt;code&gt;x_api_key&lt;/code&gt;, you can add a plug that sits before the &lt;code&gt;SimpleTokenAuthentication&lt;/code&gt; plug in the pipeline and transfers the value over to the &lt;code&gt;Authorization&lt;/code&gt; header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;MaverickWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Plugs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;TransferApiTokenToAuthorization&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nv"&gt;@moduledoc&lt;/span&gt; &lt;span class="sd"&gt;"""
  The TransferApiTokenToAuthorization Plug.
  """&lt;/span&gt;

  &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="no"&gt;Plug&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Conn&lt;/span&gt;

  &lt;span class="nv"&gt;@api_key_header&lt;/span&gt; &lt;span class="s2"&gt;"x-api-key"&lt;/span&gt;
  &lt;span class="nv"&gt;@api_key_param&lt;/span&gt; &lt;span class="s2"&gt;"x_api_key"&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;opts&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;opts&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_opts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;get_token_from_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="n"&gt;get_token_from_params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
    &lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;put_req_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"authorization"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;conn&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;get_token_from_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;get_req_header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;@api_key_header&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;val&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;val&lt;/span&gt;
      &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;defp&lt;/span&gt; &lt;span class="n"&gt;get_token_from_params&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query_params&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;@api_key_param&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use this approach to transfer values from any header or param to any other header in your request.&lt;/p&gt;
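With the plug in place, both forms below would authenticate the same way (host, port, and path are hypothetical; `TOKEN` stands for a configured API key):

```shell
# Token in a custom header ...
curl -H 'x-api-key: TOKEN' http://localhost:4000/api/status
# ... or appended to the URL as a query param; the plug copies either
# value into the Authorization header before SimpleTokenAuthentication runs.
curl 'http://localhost:4000/api/status?x_api_key=TOKEN'
```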

&lt;p&gt;Banner Image : &lt;a href="https://blog.logrocket.com/wp-content/uploads/2022/10/phoenix-plugs-web-app-functions.png"&gt;https://blog.logrocket.com/wp-content/uploads/2022/10/phoenix-plugs-web-app-functions.png&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>phoenix</category>
      <category>http</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
