<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Memphis.dev team</title>
    <description>The latest articles on Forem by Memphis.dev team (@team_memphis).</description>
    <link>https://forem.com/team_memphis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F864667%2F91f509d4-11e6-45f0-93ac-81cb069c69e4.png</url>
      <title>Forem: Memphis.dev team</title>
      <link>https://forem.com/team_memphis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/team_memphis"/>
    <language>en</language>
    <item>
      <title>Ingesting Webhooks From Stripe – The Better Way</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Thu, 18 Jan 2024 12:22:37 +0000</pubDate>
      <link>https://forem.com/memphis_dev/ingesting-webhooks-from-stripe-the-better-way-h58</link>
      <guid>https://forem.com/memphis_dev/ingesting-webhooks-from-stripe-the-better-way-h58</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Learn what webhooks are and how to use them with Stripe to react to events quickly and reliably in real time using a streaming platform. This post covers everything you need to know about webhooks: what they are, how they work, examples, and how they can be improved using Memphis.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Webhooks?
&lt;/h2&gt;

&lt;p&gt;Imagine a world where information flows seamlessly between systems. In this world, there’s no need for constant browser refreshing or sending numerous requests for updates. Welcome to the domain of webhooks, where real-time communication glides with the rhythm of efficiency and automation.&lt;/p&gt;

&lt;p&gt;Webhooks stand out for their effectiveness for both the provider and the user. The main challenge with webhooks, however, is the complexity involved in their initial setup.&lt;/p&gt;

&lt;p&gt;Often likened to reverse APIs, webhooks offer something akin to an API specification, requiring you to craft an API for the webhook to interact with. When the webhook initiates an HTTP request to your application, usually through a POST method, your task is to interpret and handle this incoming data effectively.&lt;/p&gt;
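&lt;p&gt;As a minimal illustration of that handling step, here is a sketch of a receiver that interprets a webhook’s JSON POST body and dispatches on the event type. The event names and handler actions are hypothetical:&lt;/p&gt;

```python
import json

def handle_webhook(body: bytes) -> str:
    """Interpret the JSON payload a provider POSTs to our endpoint."""
    event = json.loads(body.decode("utf-8"))
    event_type = event.get("type", "unknown")
    # Dispatch on the event type; real handlers would enqueue work or update state
    handlers = {
        "user.created": lambda e: "welcome email queued",
        "order.placed": lambda e: "order recorded",
    }
    return handlers.get(event_type, lambda e: "ignored")(event)

print(handle_webhook(b'{"type": "order.placed", "id": "evt_1"}'))  # order recorded
```

&lt;p&gt;A real endpoint would wrap this in an HTTP server and return 2xx promptly so the provider does not retry unnecessarily.&lt;/p&gt;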

&lt;h2&gt;
  
  
  The downsides of webhooks
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Push-based&lt;/strong&gt;:
Webhooks deliver or push events to your clients’ services, requiring them to handle the resulting back pressure. While understandable, this approach can impede your customers’ progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementing a server&lt;/strong&gt;:
For your client’s services to receive webhooks, they need a server that listens to incoming events. This involves managing CORS, middleware, opening ports, and securing network access, which adds extra load to their service by increasing overall memory consumption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retry&lt;/strong&gt;:
Services experience frequent crashes or unavailability due to various reasons. While some triggered webhooks might lead to insignificant events, others can result in critical issues, such as incomplete datasets where orders fail to be documented in CRM or new shipping instructions are not being processed. Hence, having a robust retry mechanism becomes crucial.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent&lt;/strong&gt;:
Standard webhook systems generally lack event persistence for future audits and replays.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replay&lt;/strong&gt;:
Similarly, it boils down to the user or developer experience you aim to provide. While setting up an endpoint for users to retrieve past events is feasible, it demands meticulous handling, intricate business logic, an extra database, and increased complexity for the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throttling&lt;/strong&gt;:
Throttling is a technique used in computing and networking to control data flow, requests, or operations to prevent overwhelming a system or service. It limits the rate or quantity of incoming or outgoing data, recommendations, or actions. The primary challenge lies not in implementing throttling but in managing distinct access levels for various customers. Consider having an enterprise client with notably higher throughput needs compared to others. To accommodate this, you’d require a multi-tenant webhook system tailored to support diverse demands.&lt;/li&gt;
&lt;/ol&gt;
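&lt;p&gt;To make the multi-tenant throttling point concrete, here is a sketch of a per-customer token bucket; the tier names and rates are made up for illustration:&lt;/p&gt;

```python
import time

class TenantThrottle:
    """Token bucket with a separate refill rate per customer (illustrative sketch)."""

    def __init__(self, rates):
        self.rates = rates   # tenant name mapped to tokens refilled per second
        self.state = {}      # tenant name mapped to (tokens, last_refill_time)

    def allow(self, tenant):
        rate = self.rates.get(tenant, 1.0)
        now = time.monotonic()
        tokens, last = self.state.get(tenant, (rate, now))
        # Refill proportionally to elapsed time, capped at the bucket size
        tokens = min(rate, tokens + (now - last) * rate)
        if tokens >= 1.0:
            self.state[tenant] = (tokens - 1.0, now)
            return True
        self.state[tenant] = (tokens, now)
        return False

# An enterprise tenant gets far more headroom than a free-tier one
throttle = TenantThrottle({"enterprise": 100.0, "free": 1.0})
```

&lt;p&gt;Each tenant draws from its own bucket, so a burst from one customer never starves another.&lt;/p&gt;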

&lt;h2&gt;
  
  
  Why use webhooks with Stripe
&lt;/h2&gt;

&lt;p&gt;When you’re piecing together Stripe integrations, it’s crucial to have your applications tuned in to live events from your Stripe accounts. This way, your backend systems are always ready to spring into action based on these events.&lt;/p&gt;

&lt;p&gt;To get this real-time event flow, you’ll need to set up webhook endpoints in your application. Once you’ve registered these endpoints, Stripe becomes your real-time informant, pushing event data directly to your application’s webhook endpoint as things happen in your Stripe account. Stripe uses HTTPS to deliver these events, packaging them as JSON payloads that feature an Event object.&lt;/p&gt;

&lt;p&gt;Webhook events are your go-to for monitoring asynchronous activities. They’re perfect for keeping tabs on events like a customer’s bank giving the green light on a payment, charge disputes from customers, successful recurring payments, or managing subscription billing. With webhooks, you’re not just informed; you’re always a step ahead.&lt;/p&gt;
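&lt;p&gt;Whatever the destination, Stripe signs each delivery with a &lt;code&gt;Stripe-Signature&lt;/code&gt; header so the receiver can verify authenticity. Below is a standard-library sketch of that check, following Stripe’s documented HMAC-SHA256 scheme over “timestamp.payload”; in production, prefer the official Stripe SDK’s verifier:&lt;/p&gt;

```python
import hashlib
import hmac

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str) -> bool:
    """Verify a Stripe-Signature header of the form "t=...,v1=..." against the payload."""
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    # Stripe signs the string "{timestamp}.{raw_payload}" with HMAC-SHA256
    signed = parts["t"].encode() + b"." + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, parts["v1"])
```

&lt;p&gt;A real endpoint would also reject stale timestamps to prevent replay of captured deliveries.&lt;/p&gt;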

&lt;h2&gt;
  
  
  Why use Memphis as your Stripe’s webhook destination
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Convert the push to pull: Memphis.dev operates as a pull-based message broker where clients actively pull and consume data from the broker.&lt;/li&gt;
&lt;li&gt;Retry: Memphis provides a built-in retry system that maintains client states and offsets, even during disconnections. This configurable mechanism resends unacknowledged events until they’re acknowledged or until the maximum number of retries is reached.&lt;/li&gt;
&lt;li&gt;Persistent: Memphis ensures message persistence by assigning a retention policy to each topic and message.&lt;/li&gt;
&lt;li&gt;Replay: The client has the flexibility to rotate the active offset, enabling easy access to read and replay any past event that complies with the retention policy and is still stored.&lt;/li&gt;
&lt;li&gt;Back pressure: Let Memphis handle the back pressure and scale from your team and clients.&lt;/li&gt;
&lt;li&gt;Backup: You can easily enable automatic backup that will back up each and every message to an external S3-compatible storage.&lt;/li&gt;
&lt;li&gt;Dead-letter: Dead-letter stations let you preserve unconsumed messages, rather than discarding them, so you can diagnose why their processing failed.&lt;/li&gt;
&lt;/ol&gt;
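&lt;p&gt;The pull, retry, and dead-letter points above can be sketched with a toy in-memory model. This is not the Memphis API, just an illustration of the delivery semantics:&lt;/p&gt;

```python
import collections

class ToyStation:
    """In-memory sketch of pull-based delivery with ack, retry, and dead-letter."""

    def __init__(self, max_deliveries=3):
        self.queue = collections.deque()
        self.max_deliveries = max_deliveries
        self.dead_letter = []

    def produce(self, data):
        self.queue.append({"data": data, "deliveries": 0})

    def pull(self):
        # The consumer asks for work at its own pace: no back pressure on it
        if not self.queue:
            return None
        msg = self.queue.popleft()
        msg["deliveries"] += 1
        return msg

    def nack(self, msg):
        # Unacknowledged messages are redelivered until the retry budget runs out,
        # then preserved in a dead-letter list for diagnosis instead of being discarded
        if msg["deliveries"] >= self.max_deliveries:
            self.dead_letter.append(msg)
            return "dead-letter"
        self.queue.append(msg)
        return "requeued"
```

&lt;p&gt;Acknowledging a message simply means never re-queueing it; everything else follows from the pull loop.&lt;/p&gt;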

&lt;h2&gt;
  
  
  How to get started
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Head to Stripe’s webhook &lt;a href="https://dashboard.stripe.com/login?redirect=%2Fwebhooks%2Fcreate" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0wl9szswfs86l42ouei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0wl9szswfs86l42ouei.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;a href="https://cloud.memphis.dev/" rel="noopener noreferrer"&gt;Memphis account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new Memphis station&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow3gzkv7xjxfz2flc0s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow3gzkv7xjxfz2flc0s3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new client-type user and generate a URL for producing data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnv8mnt8svy6lw6n69uy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnv8mnt8svy6lw6n69uy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F763yrw00ivvuhaskkdrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F763yrw00ivvuhaskkdrq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t073580jfuvux9wcx37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7t073580jfuvux9wcx37.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the produce URL to the Stripe dashboard and click “Add endpoint”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl9b7cn2di5rzlxwbkku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyl9b7cn2di5rzlxwbkku.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a selected event occurs, Stripe will send it to your Memphis station&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdceam0z2zezg9kns2bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdceam0z2zezg9kns2bx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://share.hsforms.com/1lBALaPyfSRS3FLLLy_Hsfgcqtej" rel="noopener noreferrer"&gt;Join 4500+ others and sign up for our data engineering newsletter&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Originally published at Memphis.dev By &lt;a href="https://www.linkedin.com/in/shoham-roditi-elimelech-0b933314a/" rel="noopener noreferrer"&gt;Shoham Roditi Elimelech&lt;/a&gt;, software engineer at @&lt;a href="https://memphis.dev/blog/ingesting-webhooks-from-stripe-the-better-way/" rel="noopener noreferrer"&gt;Memphis.dev&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis" rel="noopener noreferrer"&gt;Github&lt;/a&gt; • &lt;a href="https://docs.memphis.dev/memphis/getting-started/readme" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; • &lt;a href="https://discord.gg/sGgmCQP7jc" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webhooks</category>
      <category>stripe</category>
      <category>messagebroker</category>
    </item>
    <item>
      <title>Event Sourcing with Memphis.dev: A Beginner’s Guide</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Thu, 04 Jan 2024 12:14:54 +0000</pubDate>
      <link>https://forem.com/memphis_dev/event-sourcing-with-memphisdev-a-beginners-guide-71a</link>
      <guid>https://forem.com/memphis_dev/event-sourcing-with-memphisdev-a-beginners-guide-71a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the realm of modern software development, managing and maintaining data integrity is paramount. Traditional approaches often involve updating the state of an application directly within a database. However, as systems grow in complexity, ensuring data consistency and traceability becomes more challenging. This is where Event Sourcing, coupled with a powerful distributed streaming platform like Memphis.dev, emerges as a robust solution and a great data structure to work with.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Event Sourcing?
&lt;/h2&gt;

&lt;p&gt;At its core, Event Sourcing is a design pattern that captures every change or event that occurs in a system as an immutable and sequentially ordered log of events. Instead of persisting the current state of an entity, Event Sourcing stores a sequence of state-changing events. These events serve as a single source of truth for the system’s state at any given point in time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Event Sourcing in Action
&lt;/h2&gt;

&lt;p&gt;Imagine a banking application that tracks an account’s balance. Instead of only storing the current balance in a database, Event Sourcing captures all the events that alter the balance. Deposits, withdrawals, or any adjustments are recorded as individual events in chronological order.&lt;/p&gt;

&lt;p&gt;Let’s break down how this might work:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Creation&lt;/strong&gt;: When a deposit of $100 occurs in the account, an event, such as FundsDeposited, with relevant metadata (timestamp, amount, account number) is created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Storage&lt;/strong&gt;: These events are then appended to an immutable log, forming a sequential history of transactions specific to that account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State Reconstruction&lt;/strong&gt;: To obtain the current state of an account, the application replays these events sequentially to compute the current balance. Each event is applied in order to derive the current balance, enabling the system to rebuild state at any given point in time.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
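&lt;p&gt;Step 3 can be sketched in a few lines; the event names mirror the example above, and the fold logic is illustrative:&lt;/p&gt;

```python
def replay_balance(events):
    """Rebuild the current balance by applying events in order (state reconstruction)."""
    balance = 0
    for event in events:
        if event["type"] == "FundsDeposited":
            balance += event["amount"]
        elif event["type"] == "FundsWithdrawn":
            balance -= event["amount"]
    return balance

history = [
    {"type": "FundsDeposited", "amount": 100},
    {"type": "FundsWithdrawn", "amount": 30},
    {"type": "FundsDeposited", "amount": 20},
]
print(replay_balance(history))  # 90
```

&lt;p&gt;Replaying a prefix of the log instead of the whole history yields the balance at any earlier point in time.&lt;/p&gt;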

&lt;h2&gt;
  
  
  Leveraging Memphis in Event Sourcing
&lt;/h2&gt;

&lt;p&gt;Memphis.dev, an open-source distributed event streaming platform, is perfect for implementing Event Sourcing due to its features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability and Fault Tolerance&lt;/strong&gt;: Memphis’ distributed nature allows for horizontal scalability and ensures fault tolerance by replicating data across multiple brokers (nodes).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ordered and Immutable Event Logs&lt;/strong&gt;: Memphis’ log-based architecture aligns seamlessly with the principles of Event Sourcing. It maintains ordered, immutable logs, preserving the sequence of events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time Event Processing&lt;/strong&gt;: Memphis Functions offers a serverless framework, built within the Memphis platform to handle high-throughput, real-time event streams. Applications can process events as they occur, enabling near real-time reactions to changes in state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managing Schemas&lt;/strong&gt;: One of the major challenges in event sourcing is maintaining schemas across the different events to avoid upstream breaks and client crashes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benefits of Event Sourcing with Memphis
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Temporal Queries and Auditing&lt;/strong&gt;: By retaining a complete history of events, it becomes possible to perform temporal queries and reconstruct past states, aiding in auditing and compliance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility and Scalability&lt;/strong&gt;: As the system grows, Event Sourcing with Memphis allows for easy scalability, as new consumers can independently process the event log.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault Tolerance and Recovery&lt;/strong&gt;: In the event of failures, the ability to rebuild state from events ensures resiliency and quick recovery.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Let’s see what it looks like via code
&lt;/h2&gt;

&lt;p&gt;Events are pushed, in their order of creation, into a Memphis station (i.e., a topic).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event Log:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class EventLog:
    def __init__(self):
        self.events = []

    def append_event(self, event):
        self.events.append(event)

    def get_events(self):
        return self.events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Memphis Producer:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from __future__ import annotations
import asyncio
from memphis import Memphis, Headers, MemphisError, MemphisConnectError, MemphisHeaderError, MemphisSchemaError
import json

class MemphisEventProducer:
    def __init__(self,host="my.memphis.dev"):
        try:
        self.memphis = Memphis()
        await self.memphis.connect(host=host, username="&amp;lt;application type username&amp;gt;", password="&amp;lt;password&amp;gt;")

    def send_event(self, topic, event):
        await self.memphis.produce(station_name=topic, producer_name='prod_py',
  message=event,nonblocking=False)
        except (MemphisError, MemphisConnectError, MemphisHeaderError, MemphisSchemaError) as e:
          print(e)
        finally:
          await self.memphis.close()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Usage:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Event Log
event_log = EventLog()

# Initialize Memphis Producer
producer = MemphisEventProducer()

# Append events to the event log and produce them to Memphis
events_to_publish = [
    {"type": "Deposit", "amount": 100},
    {"type": "Withdrawal", "amount": 50},
    # Add more events as needed
]

for event in events_to_publish:
    event_log.append_event(event)
    producer.send_event('account-events', event)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Criteria to choose the right event streaming platform for the job
&lt;/h2&gt;

&lt;p&gt;When implementing Event Sourcing with a message broker, several key features are crucial for a streamlined and efficient system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Persistent Message Storage:&lt;/strong&gt;&lt;br&gt;
Durability: Messages should be reliably stored even in the event of failures. This ensures that no events are lost and the event log remains intact.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ordered and Immutable Event Logs:&lt;/strong&gt;&lt;br&gt;
Sequential Order: Preserving the order of events is critical for accurate state reconstruction. Events must be processed in the same sequence they were produced.&lt;br&gt;
Immutability: Once an event is stored, it should not be altered. This guarantees the integrity and consistency of the event log.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability and Performance:&lt;/strong&gt;&lt;br&gt;
Horizontal Scalability: The message broker should support horizontal scaling to accommodate increased event volume without sacrificing performance.&lt;br&gt;
Low Latency: Minimizing message delivery time ensures near real-time processing of events, enabling quick reactions to state changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fault Tolerance and High Availability:&lt;/strong&gt;&lt;br&gt;
Redundancy: Ensuring data redundancy across multiple nodes or partitions prevents data loss in the event of node failures.&lt;br&gt;
High Availability: Continuous availability of the message broker is essential to maintain system functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consumer Flexibility and State Rebuilding:&lt;/strong&gt;&lt;br&gt;
Consumer Groups: Support for consumer groups allows multiple consumers to independently process the same set of events, aiding in parallel processing and scalability.&lt;br&gt;
State Rebuilding: The broker should facilitate easy rebuilding of the application state by replaying events, enabling historical data retrieval.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retention Policies and Archiving:&lt;/strong&gt;&lt;br&gt;
Retention Policies: Configurable retention policies allow managing the duration or size of stored messages. This ensures efficient storage management.&lt;br&gt;
Archiving: Ability to archive or offload older events to long-term storage for compliance or historical analysis purposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and Management:&lt;/strong&gt;&lt;br&gt;
Metrics and Monitoring: Providing insights into message throughput, latency, and system health helps in monitoring and optimizing system performance.&lt;br&gt;
Admin Tools: Easy-to-use administrative tools for managing topics, partitions, and configurations streamline system management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and Compliance:&lt;/strong&gt;&lt;br&gt;
Encryption and Authentication: Support for encryption and authentication mechanisms ensures the confidentiality and integrity of transmitted events.&lt;br&gt;
Compliance Standards: Adherence to compliance standards (such as GDPR, SOC2) ensures that sensitive data is handled appropriately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seamless Integration and Ecosystem Support:&lt;/strong&gt;&lt;br&gt;
Compatibility and Integrations: Seamless integration with various programming languages and frameworks, along with support for diverse ecosystems, enhances usability.&lt;br&gt;
Ecosystem Tools: Availability of connectors, libraries, and frameworks that facilitate Event Sourcing simplifies implementation and reduces development efforts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Choosing a message broker that aligns with these critical features is essential for implementing robust Event Sourcing, ensuring data integrity, scalability, and resilience within your application architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  Event Sourcing using a Database vs a Message Broker (Streaming Platform)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Case Complexity&lt;/strong&gt;: For simpler applications or where scalability isn’t a primary concern, databases might suffice. For higher reliability, distributed systems needing high scalability, and real-time processing, a message broker can be more suitable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replay&lt;/strong&gt;: In event streaming platforms or message brokers, events are stored in FIFO order, one after the other as they first appear. That makes it easy for the consumer on the other side to follow the natural flow of events and replay the entire “scene.” In databases, this is not the case: additional fields, such as timestamps, must be added to order the data by time, along with extra logic to determine the latest state of an entity.&lt;/p&gt;




&lt;p&gt;Continue your learning: &lt;a href="https://memphis.dev/blog/event-sourcing-outgrows-databases/"&gt;read&lt;/a&gt; how and why event sourcing outgrows the database.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://share.hsforms.com/1lBALaPyfSRS3FLLLy_Hsfgcqtej"&gt;Join 4500+ others and sign up for our data engineering newsletter.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Originally published at Memphis.dev By &lt;a href="https://www.linkedin.com/in/idan-asulin/"&gt;Idan Asulin&lt;/a&gt;, Co-Founder &amp;amp; CTO at @Memphis.dev.&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis"&gt;Github&lt;/a&gt; • &lt;a href="https://twitter.com/Memphis_Dev"&gt;Twitter&lt;/a&gt; • &lt;a href="https://discord.com/invite/WZpysvAeTf"&gt;Discord&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dataintegrity</category>
      <category>dataconsistency</category>
      <category>streamingdata</category>
    </item>
    <item>
      <title>Comparing Webhooks and Event Consumption: A Comparative Analysis</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Thu, 28 Dec 2023 14:59:11 +0000</pubDate>
      <link>https://forem.com/memphis_dev/comparing-webhooks-and-event-consumption-a-comparative-analysis-37a3</link>
      <guid>https://forem.com/memphis_dev/comparing-webhooks-and-event-consumption-a-comparative-analysis-37a3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In event-driven architecture and API integration, two vital concepts stand out: webhooks and event consumption. Both are mechanisms used to facilitate communication between different applications or services. Yet, they differ significantly in their approaches and functionalities, and by the end of this article, you will learn why consuming events can be a much more robust option than serving them using a webhook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The foundational premise of the article assumes you function as a platform that wants or already delivers internal events to your clients through webhooks.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Webhooks
&lt;/h2&gt;

&lt;p&gt;Webhooks are user-defined HTTP callbacks triggered by specific events on a service. They enable real-time communication between systems by notifying other applications when a particular event occurs. Essentially, webhooks eliminate the need to do manual polling or checking for updates, allowing for a more efficient, event-driven, and responsive system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Webhooks:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event-driven:&lt;/strong&gt; Webhooks are event-driven and are only triggered when a specified event occurs. For example, a webhook can notify an application when a new user signs up or when an order is placed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Outbound Requests:&lt;/strong&gt; They use HTTP POST requests to send data payloads to a predefined URL the receiving application provides.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous Nature:&lt;/strong&gt; Webhooks operate asynchronously, allowing the sending and receiving systems to continue their processes independently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Event Consumption
&lt;/h2&gt;

&lt;p&gt;Event consumption involves receiving, processing, and acting upon events emitted by various systems or services. This mechanism facilitates the seamless integration and synchronization of data across different applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Event Consumption:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Message Queues or Brokers:&lt;/strong&gt; Event consumption often involves utilizing message brokers like Memphis.dev, Kafka, RabbitMQ, or AWS SQS to manage and distribute events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Subscriber-Driven:&lt;/strong&gt; Unlike webhooks, event consumption relies on subscribers who listen to event streams and process incoming events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; Event consumption systems are highly scalable, efficiently handling large volumes of events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architectural questions for a better decision
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Push-based vs. Pull-based:&lt;/strong&gt;&lt;br&gt;
Webhooks deliver or push events to your clients’ services, requiring them to handle the resulting back pressure. While understandable, this approach can impede your customers’ progress. Using a scalable message broker to support consumption can alleviate this burden for your clients. How? By allowing clients to pull events based on their availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implementing a server vs. Implementing a broker SDK:&lt;/strong&gt;&lt;br&gt;
For your client’s services to receive webhooks, they need a server that listens to incoming events. This involves managing CORS, middleware, opening ports, and securing network access, which adds extra load to their service by increasing overall memory consumption.&lt;br&gt;
Opting for pull-based consumption eliminates most of these requirements. With pull-based consumption, as the traffic is egress (outgoing) rather than ingress (incoming), there’s no need to set up a server, open ports, or handle CORS. Instead, the client’s service initiates the communication, significantly reducing complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retry:&lt;/strong&gt;&lt;br&gt;
Services experience frequent crashes or unavailability due to various reasons. While some triggered webhooks might lead to insignificant events, others can result in critical issues, such as incomplete datasets where orders fail to be documented in CRM or new shipping instructions are not being processed. Hence, having a robust retry mechanism becomes crucial. This can be achieved by incorporating a retry mechanism within the webhook system or introducing an endpoint within the service.&lt;br&gt;
In contrast, when utilizing a message broker, events are acknowledged only after processing. Although implementing a retry mechanism is necessary in most cases, it’s typically more straightforward and native than handling retries with webhooks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Persistent:&lt;/strong&gt;&lt;br&gt;
Standard webhook systems generally lack event persistence for future audits and replays, a capability inherently provided by persisted message brokers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replay:&lt;/strong&gt;&lt;br&gt;
Similarly, it boils down to the user or developer experience you aim to provide. While setting up an endpoint for users to retrieve past events is feasible, it demands meticulous handling, intricate business logic, an extra database, and increased complexity for the client. In contrast, using a message broker supporting this feature condenses the process to just a line or two of code, significantly reducing complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throttling:&lt;/strong&gt;&lt;br&gt;
Throttling is a technique used in computing and networking to control data flow, requests, or operations to prevent overwhelming a system or service. It limits the rate or quantity of incoming or outgoing data, recommendations, or actions. The primary challenge lies not in implementing throttling but in managing distinct access levels for various customers. Consider having an enterprise client with notably higher throughput needs compared to others. To accommodate this, you’d require a multi-tenant webhook system tailored to support diverse demands or opt for a message broker or streaming platform designed to handle such differential requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
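&lt;p&gt;The throttling point above can be sketched with a simple per-tenant token bucket. This is an illustrative sketch only; the tenant names and limits below are hypothetical and not part of any Memphis API:&lt;/p&gt;

```javascript
// Minimal per-tenant token bucket: each tenant refills at its own rate,
// so an enterprise client can be granted higher throughput than others.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  // Returns true if the request may proceed, false if it must be throttled.
  tryRemoveToken(now = Date.now()) {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Hypothetical tenants with different limits (capacity, refill rate).
const buckets = {
  'enterprise-client': new TokenBucket(100, 50),
  'free-tier-client': new TokenBucket(5, 1),
};

function allowRequest(tenant) {
  return buckets[tenant].tryRemoveToken();
}
```

&lt;p&gt;A multi-tenant webhook system would need to maintain a structure like this per customer; a broker or streaming platform typically gives you equivalent per-tenant limits out of the box.&lt;/p&gt;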




&lt;h2&gt;
  
  
  Memphis as a tailor-made solution for the task
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GyNsQpM5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9yom07p44e0fmw3yrfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GyNsQpM5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9yom07p44e0fmw3yrfk.png" alt="Image description" width="800" height="834"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are still iterating on the subject, so if you have any thoughts or ideas, I would love to learn from them: &lt;a href="mailto:idan@memphis.dev"&gt;idan@memphis.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webhooks</category>
      <category>eventconsumption</category>
      <category>eventdriven</category>
      <category>datastructures</category>
    </item>
    <item>
      <title>How to handle API rate limitations with a queue</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 20 Dec 2023 14:57:10 +0000</pubDate>
      <link>https://forem.com/memphis_dev/how-to-handle-api-rate-limitations-with-a-queue-8a8</link>
      <guid>https://forem.com/memphis_dev/how-to-handle-api-rate-limitations-with-a-queue-8a8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Rate limitation refers to restricting the number of times a specific action can be performed within a certain time frame. For example, an API might have rate limitations restricting user or app requests within a given period. This helps prevent server overload, ensures fair usage, and maintains system stability and security.&lt;/p&gt;

&lt;p&gt;Rate limitation is also a challenge for the apps that encounter it, as it requires them to “slow down” or pause. Here’s a typical scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initial Request:&lt;/strong&gt; When the app initiates communication with the API, it requests specific data or functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Response:&lt;/strong&gt; The API processes the request and responds with the requested information or performs the desired action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rate-Limitation:&lt;/strong&gt; If the app has reached the limit, it will usually need to wait until the next designated time frame (anywhere from a minute to an hour) before making additional requests. If it is a “soft” rate limitation and the time frames are known and linear, it’s easier to handle. Often, though, the waiting time climbs with every block, requiring different, custom handling for each API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Handling Rate Limit Exceedances:&lt;/strong&gt; If the app exceeds the rate limit, it might receive an error response from the API (such as a “429 Too Many Requests” status code). The app needs to handle this gracefully, possibly by queuing requests, implementing backoff strategies (waiting for progressively more extended periods before retrying), or informing the user about the rate limit being reached.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
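&lt;p&gt;Step 4 above can be sketched as a small helper that decides how long to wait after a “429 Too Many Requests” response. The &lt;code&gt;Retry-After&lt;/code&gt; header is standard HTTP; the fallback delay is an illustrative assumption:&lt;/p&gt;

```javascript
// Decide how long to wait before retrying, based on the API response.
// Prefers the standard Retry-After header (in seconds), falling back
// to a default delay when the header is missing or unparsable.
function retryDelayMs(statusCode, headers, defaultDelayMs = 1000) {
  if (statusCode !== 429) return 0; // not rate-limited: no wait needed
  const seconds = Number(headers['retry-after']);
  if (Number.isFinite(seconds) && seconds >= 0) {
    return seconds * 1000;
  }
  return defaultDelayMs;
}
```

&lt;p&gt;Some APIs instead send headers like &lt;code&gt;X-RateLimit-Reset&lt;/code&gt;; the same idea applies, but the parsing is API-specific.&lt;/p&gt;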

&lt;p&gt;To effectively operate within rate limitations, apps often incorporate strategies like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throttling:&lt;/strong&gt; Regulating the rate of outgoing requests to align with the API’s rate limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Caching:&lt;/strong&gt; Storing frequently requested data locally to reduce the need for repeated API calls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exponential Backoff:&lt;/strong&gt; Implementing a strategy where the app waits increasingly longer between subsequent retries after hitting a rate limit to reduce server load and prevent immediate retries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Queue&lt;/strong&gt;? More in the next section&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
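&lt;p&gt;The exponential backoff strategy above can be sketched as follows; the base delay, cap, and jitter range are illustrative choices, not prescribed values:&lt;/p&gt;

```javascript
// Exponential backoff: wait baseMs * 2^attempt, capped at capMs,
// plus random jitter so many clients do not retry in lockstep.
function backoffDelayMs(attempt, baseMs = 500, capMs = 60000) {
  const exponential = Math.min(capMs, baseMs * 2 ** attempt);
  const jitter = Math.random() * 0.1 * exponential; // up to 10% jitter
  return exponential + jitter;
}
```

&lt;p&gt;With these defaults, attempt 0 waits roughly half a second, attempt 3 about four seconds, and anything beyond attempt 7 settles at the one-minute cap.&lt;/p&gt;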

&lt;h2&gt;
  
  
  Using a queue
&lt;/h2&gt;

&lt;p&gt;A queue serves as an excellent “sidekick” or tool for helping services manage rate limitations due to its ability to handle tasks systematically. However, while it offers significant benefits, it’s not a standalone solution for this purpose.&lt;/p&gt;

&lt;p&gt;In constructing a robust architecture, the service or app used to interact with an external API subject to rate limitations often handles tasks asynchronously. This service is typically initiated by tasks derived from a queue. When the service encounters a rate limit, it can easily return the job to the main queue, or assign it to a separate queue designated for delayed tasks, and revisit it after a specific waiting period, say X seconds.&lt;/p&gt;

&lt;p&gt;This reliance on a queue system is highly advantageous, primarily because of its temporary nature and ordering. However, the queue alone cannot fully address rate limitations; it requires additional features or help from the service itself to effectively handle these constraints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges may arise when utilizing a queue:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Tasks re-entering the queue might return earlier than necessary, as their timing isn’t directly controlled by your service.&lt;/li&gt;
&lt;li&gt;Exceeding rate limitations due to frequent calls within restricted timeframes. This may necessitate implementing sleep or wait mechanisms, commonly considered poor practice due to their potential impact on performance and responsiveness.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Here is what it will look like with RabbitMQ:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const amqp = require('amqplib');
const axios = require('axios');

// Function to make API requests, simulating rate limitations
async function makeAPICall(url) {
  try {
    const response = await axios.get(url);
    console.log('API Response:', response.data);
  } catch (error) {
    console.error('API Error:', error.message);
  }
}

// Connect to RabbitMQ server
async function connect() {
  try {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();

    const queue = 'rateLimitedQueue';
    await channel.assertQueue(queue, { durable: true });

    // Consume messages from the queue
    channel.consume(queue, async msg =&amp;gt; {
      const { url, delayInSeconds } = JSON.parse(msg.content.toString());

      // Simulating rate limitation
      await new Promise(resolve =&amp;gt; setTimeout(resolve, delayInSeconds * 1000));

      await makeAPICall(url); // Make the API call

      channel.ack(msg); // Acknowledge message processing completion
    });
  } catch (error) {
    console.error('RabbitMQ Connection Error:', error.message);
  }
}

// Function to send a message to the queue
async function addToQueue(url, delayInSeconds) {
  try {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();

    const queue = 'rateLimitedQueue';
    await channel.assertQueue(queue, { durable: true });

    const message = JSON.stringify({ url, delayInSeconds });
    channel.sendToQueue(queue, Buffer.from(message), { persistent: true });

    console.log('Task added to the queue');
  } catch (error) {
    console.error('RabbitMQ Error:', error.message);
  }
}

// Usage example
addToQueue('https://api.example.com/data', 5); // Add an API call with a delay of 5 seconds

// Start the consumer
connect();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Or with Kafka
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { Kafka } = require('kafkajs');
const axios = require('axios');

// Function to make API requests, simulating rate limitations
async function makeAPICall(url) {
  try {
    const response = await axios.get(url);
    console.log('API Response:', response.data);
  } catch (error) {
    console.error('API Error:', error.message);
  }
}

// Kafka configuration
const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092'], // Replace with your Kafka broker address
});

// Create a Kafka producer
const producer = kafka.producer();

// Connect to Kafka and send messages
async function produceToKafka(topic, message) {
  await producer.connect();
  await producer.send({
    topic,
    messages: [{ value: message }],
  });
  await producer.disconnect();
}

// Create a Kafka consumer
const consumer = kafka.consumer({ groupId: 'my-group' });

// Consume messages from Kafka topic
async function consumeFromKafka(topic) {
  await consumer.connect();
  await consumer.subscribe({ topic });
  await consumer.run({
    eachMessage: async ({ message }) =&amp;gt; {
      const { url, delayInSeconds } = JSON.parse(message.value.toString());

      // Simulating rate limitation
      await new Promise(resolve =&amp;gt; setTimeout(resolve, delayInSeconds * 1000));

      await makeAPICall(url); // Make the API call
    },
  });
}

// Usage example - Sending messages to Kafka topic
async function addToKafka(topic, url, delayInSeconds) {
  const message = JSON.stringify({ url, delayInSeconds });
  await produceToKafka(topic, message);
  console.log('Message added to Kafka topic');
}

// Start consuming messages from Kafka topic
const kafkaTopic = 'rateLimitedTopic';
consumeFromKafka(kafkaTopic);

// Usage example - Adding messages to Kafka topic
addToKafka('rateLimitedTopic', 'https://api.example.com/data', 5); // Add an API call with a delay of 5 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Both approaches are legitimate, yet they necessitate your service to incorporate a ‘sleep’ mechanism.&lt;/p&gt;

&lt;p&gt;With Memphis, you can offload the delay from the client to the queue using a simple feature made&lt;br&gt;
just for that purpose, called “Delayed Messages”. Delayed messages allow you to send a received message back to the broker when your consumer application requires extra processing time.&lt;/p&gt;

&lt;p&gt;What sets apart Memphis’ implementation is the consumer’s capability to control this delay independently and atomically.&lt;br&gt;
Within the station, the count of unconsumed messages doesn’t impact the consumption of delayed messages. For instance, if a 60-second delay is necessary, it precisely configures the invisibility time for that specific message.&lt;/p&gt;

&lt;h2&gt;
  
  
  Memphis.dev Delayed Messages
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A message is received by the consumer group.&lt;/li&gt;
&lt;li&gt;An event occurs, prompting the consumer group to pause processing the message.&lt;/li&gt;
&lt;li&gt;Assuming the &lt;code&gt;maxMsgDeliveries&lt;/code&gt; hasn’t hit its limit, the consumer will activate &lt;code&gt;message.delay(delayInMilliseconds)&lt;/code&gt;, bypassing the message. Instead of immediately reprocessing the same message, the broker will retain it for the specified duration.&lt;/li&gt;
&lt;li&gt;The subsequent message will be consumed.&lt;/li&gt;
&lt;li&gt;Once the requested &lt;code&gt;delayInMilliseconds&lt;/code&gt; has passed, the broker will halt the primary message flow and reintroduce the delayed message into circulation.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { memphis } = require('memphis-dev');

// Function to make API requests, simulating rate limitations
async function makeAPICall(message) {
  try {
    const response = await axios.get(message.getDataAsJson()['url']);
    console.log('API Response:', response.data);
    message.ack();
  } catch (error) {
    console.error('API Error:', error.message);
    console.log("Delaying message for 1 minute");
    message.delay(60000);
  }
}

(async function () {
    let memphisConnection;

    try {
        memphisConnection = await memphis.connect({
            host: '&amp;lt;broker-hostname&amp;gt;',
            username: '&amp;lt;application-type username&amp;gt;',
            password: '&amp;lt;password&amp;gt;'
        });

        const consumer = await memphisConnection.consumer({
            stationName: '&amp;lt;station-name&amp;gt;',
            consumerName: '&amp;lt;consumer-name&amp;gt;',
            consumerGroup: ''
        });

        consumer.setContext({ key: "value" });
        consumer.on('message', async (message, context) =&amp;gt; {
            await makeAPICall(message);
        });

        consumer.on('error', (error) =&amp;gt; { });
    } catch (ex) {
        console.log(ex);
        if (memphisConnection) memphisConnection.close();
    }
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Understanding and adhering to rate limitations is crucial for app developers working with APIs. It involves managing request frequency, handling errors when limits are reached, implementing backoff strategies to prevent overloading the API servers, and utilizing the rate limit information provided by the API to optimize app performance. And now you know how to do it with a queue as well!&lt;/p&gt;

&lt;p&gt;Head to our &lt;a href="https://memphis.dev/blog/"&gt;blog&lt;/a&gt; or &lt;a href="https://docs.memphis.dev/memphis/getting-started/readme"&gt;docs&lt;/a&gt; for more examples like this!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://share.hsforms.com/1lBALaPyfSRS3FLLLy_Hsfgcqtej"&gt;Join 4500+ others and sign up for our data engineering newsletter.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Originally published at Memphis.dev By &lt;a href="https://www.linkedin.com/in/idan-asulin/"&gt;Idan Asulin&lt;/a&gt;, Co-Founder &amp;amp; CTO at @Memphis.dev.&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis"&gt;Github&lt;/a&gt; • &lt;a href="https://twitter.com/Memphis_Dev"&gt;Twitter&lt;/a&gt; • &lt;a href="https://discord.com/invite/WZpysvAeTf"&gt;Discord&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>ratelimitations</category>
      <category>messagequeue</category>
    </item>
    <item>
      <title>Real-Time Data Scrubbing Before Storing In A Data Warehouse</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 20 Dec 2023 08:22:03 +0000</pubDate>
      <link>https://forem.com/memphis_dev/real-time-data-scrubbing-before-storing-in-a-data-warehouse-1a1</link>
      <guid>https://forem.com/memphis_dev/real-time-data-scrubbing-before-storing-in-a-data-warehouse-1a1</guid>
      <description>&lt;p&gt;Between January 2023 and May 2023, companies violating general data processing principles incurred fines totaling 1.86 billion USD (!!!).&lt;/p&gt;

&lt;p&gt;In today’s data-driven landscape, the importance of data accuracy and compliance cannot be overstated. As businesses amass vast amounts of information, the need to ensure data integrity, especially PII storing, becomes paramount. Data scrubbing emerges as a crucial process, particularly in real-time scenarios, before storing information in a data warehouse.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Data Scrubbing in the context of compliance&lt;/strong&gt;&lt;br&gt;
Data scrubbing, often referred to as data cleansing or data cleaning, involves the identification and rectification of errors or inconsistencies in a dataset. In the context of compliance, it means removing certain values that qualify as PII that cannot be stored or should be handled differently.&lt;/p&gt;

&lt;p&gt;Real-time data scrubbing takes the cleansing process a step further by ensuring that incoming data is cleaned and validated instantly, before being stored in a data warehouse.&lt;/p&gt;

&lt;p&gt;Compliance standards, such as GDPR, HIPAA, or industry-specific regulations, mandate stringent requirements for data accuracy, privacy, and security. Failure to adhere to these standards can result in severe repercussions, including financial penalties and reputational damage. Real-time data scrubbing acts as a robust preemptive measure, ensuring that only compliant data is integrated into the warehouse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-driven Scrubbing&lt;/strong&gt;&lt;br&gt;
Event-driven applications stand as stateful systems that intake events from one or multiple streams and respond to these incoming events by initiating computations, updating their state, or triggering external actions.&lt;/p&gt;

&lt;p&gt;They represent a progressive shift from the conventional application structure that segregates computation and data storage into distinct tiers. In this novel architecture, these applications retrieve data from and save data to a remote transactional database.&lt;/p&gt;

&lt;p&gt;In stark contrast, event-driven applications revolve around stateful stream processing frameworks. This approach intertwines data and computation, facilitating localized data access either in-memory or through disk storage. To ensure resilience, these applications implement fault-tolerance measures by periodically storing checkpoints in remote persistent storage.&lt;/p&gt;

&lt;p&gt;In the context of scrubbing, this means the actual scrubbing takes place for each ingested event, in real-time, powering up only when new events arrive and immediately afterwards. This contrasts with time-based scrubbing, which is usually performed on top of the database after the data has been stored, meaning the potential violation has already taken place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does Memphis Functions support such a use case?&lt;/strong&gt;&lt;br&gt;
At times, a more comprehensive policy-driven cleansing may be necessary. However, if a quick, large-scale ‘eraser’ is what you require, Memphis Functions can offer an excellent solution. The diagram illustrates two options: data sourced from either a Kafka topic or a Memphis station, potentially both concurrently. This data passes through a Memphis Function named ‘&lt;a href="https://github.com/memphisdev/memphis-dev-functions/tree/master/remove-fields"&gt;remove-fields&lt;/a&gt;‘ before progressing to the data warehouse for further storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N2Q-RJ5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxy7vof8xeb9cv1fzygn.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N2Q-RJ5H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxy7vof8xeb9cv1fzygn.jpeg" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the curtain, events or streaming data are grouped into batches, a configuration determined by the user’s specifications. These batches then undergo processing via a serverless function, specifically the ‘remove-fields’ function, meticulously designed to cleanse the ingested data according to pre-established rules. Following this scrubbing process, the refined data is either consumed internally or routed to a different Kafka topic, alternatively being swiftly directed straight to the Data Warehouse (DWH) for immediate utilization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usage example&lt;/strong&gt;&lt;br&gt;
Before&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": 123456789,
  "full_name": "Peter Parker",
  "gender": "male"
}
After (Removing ‘gender’)

{
  "id": 123456789,
  "full_name": "Peter Parker"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
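&lt;p&gt;A minimal sketch of such a field-removal transformation, matching the before/after example above (illustrative only, not the actual source of the Memphis ‘remove-fields’ function):&lt;/p&gt;

```javascript
// Remove the listed PII fields from an event before it reaches
// the data warehouse. Returns a new object; the input is untouched.
function removeFields(event, fieldsToRemove) {
  const scrubbed = { ...event };
  for (const field of fieldsToRemove) {
    delete scrubbed[field];
  }
  return scrubbed;
}

// Usage, mirroring the example above: scrub the 'gender' field.
const before = { id: 123456789, full_name: 'Peter Parker', gender: 'male' };
const after = removeFields(before, ['gender']);
```

&lt;p&gt;In practice the list of fields to remove would come from a policy configuration rather than being hard-coded.&lt;/p&gt;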



&lt;p&gt;&lt;strong&gt;Next steps&lt;/strong&gt;&lt;br&gt;
An ideal follow-up action would involve implementing schema enforcement. Data warehouses are renowned for their rigorous schema enforcement practices. By integrating both a transformation layer and schema validation, it’s possible to significantly elevate data quality while reducing the risk of potential disruptions or breaks in the system. This can simply take place by attaching a Schemaverse schema to the station.&lt;/p&gt;

&lt;p&gt;Start by &lt;a href="https://cloud.memphis.dev/"&gt;signing up&lt;/a&gt; to Memphis Cloud. We have a great free plan that can get you up and running in no time, so try building a pipeline yourself.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://share.hsforms.com/1lBALaPyfSRS3FLLLy_Hsfgcqtej"&gt;Join 4500+ others and sign up for our data engineering newsletter.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Originally published at Memphis.dev By &lt;a href="https://www.linkedin.com/in/idan-asulin/"&gt;Idan Asulin&lt;/a&gt;, Co-Founder &amp;amp; CTO at @Memphis.dev.&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis"&gt;Github&lt;/a&gt; • &lt;a href="https://twitter.com/Memphis_Dev"&gt;Twitter&lt;/a&gt; • &lt;a href="https://discord.com/invite/WZpysvAeTf"&gt;Discord&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datascrubbing</category>
      <category>dataprocessing</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>gRPC vs Message Broker</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 03 Aug 2022 13:12:00 +0000</pubDate>
      <link>https://forem.com/memphis_dev/grpc-vs-message-broker-1138</link>
      <guid>https://forem.com/memphis_dev/grpc-vs-message-broker-1138</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is gRPC? How does it differ from using a message broker? And what are the perfect use cases for each?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Microservices architecture allows large software systems to be broken down into smaller independent units called services. These services usually come with their own servers and databases. The services in a microservices system need an effective way to communicate, such that operations on one service do not affect other services.&lt;/p&gt;

&lt;p&gt;There are several ways to connect services together. If you haven’t designed a microservices system before, the first thing that may come to mind is to create REST endpoints for one service that other services can call. While this can work in systems with a few services, it’s not scalable in larger systems. This is because REST works based on a blocking request-response mechanism.&lt;/p&gt;

&lt;p&gt;A better way to connect microservices is to use a protocol that offers faster request times or a non-blocking mechanism to get tasks done. Two technologies that enable this are gRPC and message brokers. gRPC sends binary data over the network and so has a faster request time. But gRPC cannot always be used because it requires memory to be allocated on the receiving end to handle the response. One way to mitigate such memory allocation is to use a message broker like &lt;a href="https://memphis.dev/" rel="noopener noreferrer"&gt;Memphis&lt;/a&gt;. A message broker allows you to queue messages that will be processed by different services in a microservices system.&lt;/p&gt;

&lt;p&gt;This article goes over the similarities and differences between gRPC and message brokers. You will learn about the pros, cons, and ideal use cases of both technologies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is gRPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grpc.io/" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt; (which is short for gRPC Remote Procedural Call) is a communication protocol that is used in place of REST to call functions between a client and a server. The client and the server can be microservices, mobile applications, or CLI tools.&lt;/p&gt;

&lt;p&gt;For a gRPC setup to work, there has to be a client and a server. The client makes a proto request to the server, and the server responds with a proto response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F887cm7pqeie3xg4lcg1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F887cm7pqeie3xg4lcg1b.png" alt="image source: https://grpc.io"&gt;&lt;/a&gt;&lt;br&gt;
image source: &lt;a href="https://grpc.io" rel="noopener noreferrer"&gt;https://grpc.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;gRPC uses HTTP 2.0, which is faster than the HTTP 1.1 that REST depends on. HTTP 2.0 enforces a binary format by default. This means protocols using HTTP 2.0 need to serialize their data to binary before sending requests over the network. gRPC uses Protobuf, a binary serialization format, for defining data schemas.&lt;/p&gt;
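&lt;p&gt;Protobuf owes much of its compactness to binary encodings such as varints, where small integers take fewer bytes on the wire. A tiny sketch of varint encoding (illustrative of the idea, not a full Protobuf implementation):&lt;/p&gt;

```javascript
// Encode a non-negative integer as a Protobuf-style varint:
// 7 bits of payload per byte, with the high bit set on every
// byte except the last to signal continuation.
function encodeVarint(n) {
  const bytes = [];
  while (n > 127) {
    bytes.push((n & 0x7f) | 0x80); // lower 7 bits, continuation bit set
    n >>>= 7;
  }
  bytes.push(n); // final byte, continuation bit clear
  return Buffer.from(bytes);
}
```

&lt;p&gt;For example, 300 encodes to the two bytes 0xAC 0x02, whereas a fixed-width 32-bit integer would always cost four bytes.&lt;/p&gt;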

&lt;p&gt;gRPC also supports data streaming, which allows a single request to return far more data than it typically would in the case of REST. This streaming can either be server-to-client streaming or bidirectional streaming between client and server. Note that the client is the service that makes the initial request, while the server is the service that sends the response.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is a Message Broker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A message broker is a server that contains a persistent, append-only log that stores messages. Think of the message broker as a server that contains different log files that get updated as new data comes in.&lt;/p&gt;

&lt;p&gt;An example use case is an image processing pipeline that converts a large image into smaller images of various sizes. The conversion task takes an average of 10 seconds per image, but you have a thousand users trying to convert their images into different sizes. You can store each conversion task in a queue within the message broker, and the message broker sends the tasks to the conversion server. This process prevents the server from being overwhelmed and keeps your services fast.&lt;/p&gt;
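&lt;p&gt;The conversion pipeline just described can be sketched with a toy in-memory queue that decouples producers from the worker. This is a stand-in for a real broker, purely to show the shape of the decoupling; the task fields are hypothetical:&lt;/p&gt;

```javascript
// A toy FIFO queue decoupling producers (users uploading images)
// from a worker that processes one conversion task at a time.
class TaskQueue {
  constructor() {
    this.tasks = [];
  }
  enqueue(task) {
    this.tasks.push(task);
  }
  dequeue() {
    return this.tasks.shift(); // FIFO: oldest task first
  }
  get size() {
    return this.tasks.length;
  }
}

// Users enqueue conversion tasks faster than the worker can process them;
// the queue absorbs the burst instead of the conversion server.
const queue = new TaskQueue();
queue.enqueue({ image: 'photo-1.png', targetSize: '128x128' });
queue.enqueue({ image: 'photo-2.png', targetSize: '64x64' });
```

&lt;p&gt;A real broker adds persistence, acknowledgements, and distribution across consumers on top of this basic idea.&lt;/p&gt;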

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmxjnx45kdykwr1ubyu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmxjnx45kdykwr1ubyu5.png" alt="services connected"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several message brokers on the market and your choice of a message broker will depend on your use case. If you’d prefer a cloud-native message broker, then Memphis is a perfect choice. You can also consider brokers such as &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt;, &lt;a href="https://redis.io/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;, and &lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Similarities between gRPC and Message Brokers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message Format&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;gRPC has similar features to message brokers, the most prominent being the message format. Both gRPC and Memphis for example use &lt;a href="https://developers.google.com/protocol-buffers/docs/proto3" rel="noopener noreferrer"&gt;proto3&lt;/a&gt; data serialization format. The data is serialized to binary and sent over the network to the client. When the data reaches the consuming client, it is deserialized back to a form the client can use, like JSON.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;gRPC and message brokers also support streaming. This means data can be sent from server to client as soon as the data is produced. An example of this is sending resized images from the image resizer service to the image watermarking service as soon as the image is resized. The image data can either be queued in a message broker or sent in an RPC call. In either case, the image data is streamed from the image resizer service to the image watermarking service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Language agnostic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;gRPC and message brokers are mostly language agnostic, with support in a variety of languages. You can use gRPC in the most widely used languages like Python, JavaScript, Java, Go, and C#. You can also connect to a message broker like Memphis using SDKs in Python, JavaScript, and Go.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How gRPC differs from a message broker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While gRPC has similar use cases as message brokers, they differ in many ways. A message broker typically stores its data on disk, while gRPC operates in RAM. A message broker is installed as an executable on a server, while gRPC depends on HTTP 2.0. This section goes into detail on how gRPC differs from a message broker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disk storage and Partitioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A message broker serves as a persistent log data structure store, working with both main memory and disk storage. This allows messages to be recovered if there is a server outage. A message broker like Memphis stores data in “stations”, and these stations are partitioned across multiple brokers. This distributed storage increases the fault tolerance of the system.&lt;/p&gt;

&lt;p&gt;gRPC on the other hand works with RAM because it operates at the source code layer. This also means that gRPC calls are not persisted to disk. Ideally, gRPC can be combined with a message broker to increase the overall performance of a distributed system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stream buffering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;gRPC, being a messaging protocol, cannot buffer streaming data. This is unlike message brokers, which can store millions of streamed messages as they are produced. An example scenario is streaming temperature data from a thermometer in a factory. If real-time processing is required, some server has to process the data as it arrives. This stream can overwhelm the server, so there needs to be a way to process streams in real-time without overwhelming it. A message broker is the ideal tool for this situation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment method&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;gRPC is a protocol based on HTTP 2.0 and works at the runtime layer of the software stack. Popular language runtimes like Node.js, Python, and Java 8 have already implemented HTTP 2.0 and so support gRPC. A software library is usually used to enable gRPC connections within a language runtime.&lt;/p&gt;

&lt;p&gt;A message broker, on the other hand, is an executable that is installed on a server and uses memory and disk space to function. Memphis, for example, can run in Docker containers or on Kubernetes. Clients that connect to a message broker can use REST APIs or gRPC. In most cases, an SDK is used to ease the process of interacting with a message broker. Memphis, for example, has language support in Go, Python, and JavaScript.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While both gRPC and message brokers are used for inter-service communication, gRPC is more suited for inter-service calls, while message brokers are more suited for task queueing.&lt;/p&gt;

&lt;p&gt;The Memphis message broker is the perfect go-to broker for your inter-service communication needs. Memphis is easy to deploy and easy to use. You can immediately start using Memphis if you have Docker installed. &lt;a href="https://docs.memphis.dev/memphis-new/getting-started/1-installation" rel="noopener noreferrer"&gt;Head over to the quick start to start using Memphis today.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Special thanks to &lt;a href="https://twitter.com/IdanAsulin1" rel="noopener noreferrer"&gt;Idan Asulin&lt;/a&gt; for the article&lt;/p&gt;

</description>
      <category>github</category>
      <category>opensource</category>
      <category>microservices</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The end of poison messages!</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 20 Jul 2022 18:31:17 +0000</pubDate>
      <link>https://forem.com/memphis_dev/the-end-of-poison-messages-1fj1</link>
      <guid>https://forem.com/memphis_dev/the-end-of-poison-messages-1fj1</guid>
      <description>&lt;p&gt;Have you ever got a call in the middle of the night saying “Infrastructure looks ok, but some service is not consuming data / messages get redelivered. Please figure it out”&lt;/p&gt;

&lt;p&gt;Redelivered messages are also called “Poison messages”.&lt;/p&gt;

&lt;p&gt;Poison messages = messages that cause a consumer to repeatedly require redelivery (possibly due to a consumer failure), such that the message is never fully processed and acknowledged, and therefore keeps being sent to the same consumer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: a message on an arbitrary queue is pushed to (or pulled by) a consumer, and that consumer, for some reason, fails to handle it. This can be due to a bug, an unknown schema, a resource issue, etc.&lt;/p&gt;

&lt;p&gt;In RabbitMQ, for example, quorum queues keep track of the number of unsuccessful delivery attempts and expose it in the “x-delivery-count” header that is included with any redelivered message. It is possible to set a delivery limit for a queue using a policy argument, delivery-limit. When a message has been returned more times than the limit, the message will be dropped or dead-lettered (if a DLX is configured).&lt;/p&gt;
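&lt;p&gt;The redelivery bookkeeping above can be sketched in a few lines of pure Python (an illustration of the semantics, not RabbitMQ code; delivery_limit mirrors the policy argument and each failed attempt stands in for a bump of x-delivery-count):&lt;/p&gt;

```python
# A pure-Python sketch of the delivery-limit semantics described above
# (an illustration, not RabbitMQ code): redeliver until the message is
# acknowledged, or it has been returned more times than the limit.
def deliver(message, handler, delivery_limit):
    for attempt in range(delivery_limit + 1):
        try:
            handler(message)              # consumer handles and acks
            return ("acked", attempt)
        except Exception:
            pass                          # each failure bumps x-delivery-count
    return ("dead-lettered", delivery_limit + 1)   # dropped, or routed to a DLX

def broken_consumer(msg):
    raise ValueError("unknown schema")

print(deliver({"id": 1}, broken_consumer, delivery_limit=3))
print(deliver({"id": 2}, lambda msg: None, delivery_limit=3))
```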

&lt;p&gt;It is known to any developer that uses a queue / messaging bus that poison messages should be taken care of, and it’s the developer’s responsibility to do so.&lt;/p&gt;

&lt;p&gt;In RabbitMQ, the most common, simple solution would be to enable a DLX (dead-letter exchange), but it doesn’t end there.&lt;/p&gt;

&lt;p&gt;Recovering a poison message is just the first part, and the developer must also understand what causes this behavior and mitigate the issue.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;&lt;br&gt;
While there is the classic solution of committing/acknowledging a message as soon as possible, it’s not an option for use cases that require messages to be acknowledged only once they have been fully handled.&lt;/p&gt;

&lt;p&gt;Other approaches –&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to handle unacknowledged messages in RabbitMQ&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Turn on DLX&lt;/li&gt;
&lt;li&gt;Configure the DLX&lt;/li&gt;
&lt;li&gt;Place a routing key&lt;/li&gt;
&lt;li&gt;Build a dedicated consumer pointed to the DLX&lt;/li&gt;
&lt;li&gt;Consume the unacknowledged messages&lt;/li&gt;
&lt;li&gt;Fix the code/events&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvhm2z719kmls4e7xgj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvhm2z719kmls4e7xgj0.png" alt="FAQ: When and how to use the RabbitMQ Dead Letter Exchange - CloudAMQP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to handle unacknowledged messages in Apache Kafka&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is no out-of-the-box redelivery/recovery of such messages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Ensure there are logs within the code, tracking exceptions, and export to pagerduty/datadog/new relic/etc…&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the retention of a message is too small, it will be gone before the developer gets the chance to debug it. In most cases, the message will not be unique and the loss can be tolerated, but in others, like transactions / atomic requests, it cannot. To mitigate this, a wrapper that provides this functionality should be built. A great example that provides that ability, and more, is Wix’s &lt;a href="https://github.com/wix/greyhound" rel="noopener noreferrer"&gt;GreyHound&lt;/a&gt;. Definitely worth taking a look.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Other use cases utilize a cache DB of some kind to persist the message while it is being processed, before it is committed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
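&lt;p&gt;Since Kafka offers no built-in redelivery, the wrapper approach in point 2 usually comes down to: try the handler, and on failure forward the raw record to a dead-letter topic instead of losing it or blocking the partition. A dependency-free sketch of that pattern (the send callable and the "orders-dlq" topic name are illustrative stand-ins for a real producer and topic):&lt;/p&gt;

```python
# A dependency-free sketch of the wrapper pattern from point 2: try the
# handler, and on failure forward the raw record to a dead-letter topic
# instead of losing it. `send` and "orders-dlq" stand in for a real
# producer and topic name.
def consume_with_dlq(records, handler, send, dlq_topic="orders-dlq"):
    processed = 0
    for record in records:
        try:
            handler(record)
            processed += 1                # in real code: commit the offset here
        except Exception as exc:
            # keep the payload so the developer can debug it later
            send(dlq_topic, {"payload": record, "error": str(exc)})
    return processed

dead_letters = []

def handler(record):
    if record == "bad":
        raise ValueError("unknown schema")

count = consume_with_dlq(["ok", "bad", "ok"], handler,
                         lambda topic, value: dead_letters.append((topic, value)))
print(count, dead_letters)
```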

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6jzc4kdzm8uotaquimt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6jzc4kdzm8uotaquimt.png" alt="architecture"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Unacknowledged messages in &lt;a href="https://memphis.dev" rel="noopener noreferrer"&gt;Memphis.dev&lt;/a&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;When we started to refine our approach, understand the needs, validate the experience, and craft the value it would bring to our users, three key objectives led our process – &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The entry point of a user to the issue.&lt;/li&gt;
&lt;li&gt;Quickly understand the problematic consumer and the root cause of the issue.&lt;/li&gt;
&lt;li&gt;Fix the code. The developer must debug the fix with a similar message to ensure it works.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1l2278hzvuk9cljrsy3u.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1l2278hzvuk9cljrsy3u.jpeg" alt="Alert-Fix-Root"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1 - &lt;strong&gt;Define the trigger&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Memphis broker, at the SDK level, we use a parameter called “maxMsgDeliveries.”&lt;br&gt;
Once this threshold is reached, the broker will not repeatedly send the “failed-to-ack” message to the same consumer group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmafrrfwcf7hpacakzdi.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmafrrfwcf7hpacakzdi.jpeg" alt="Poison message"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2 - &lt;strong&gt;Notification&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Memphis broker detects when “maxMsgDeliveries = 2” is crossed, per station, per consumer group.&lt;/li&gt;
&lt;li&gt;Persist the time_sent and payload of the redelivered message to the Memphis file-based store for 3 hours.&lt;/li&gt;
&lt;li&gt;Mark the message as poisoned by a specific consumer.&lt;/li&gt;
&lt;li&gt;Create an alert.&lt;/li&gt;
&lt;/ul&gt;
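&lt;p&gt;The flow above can be sketched in pure Python (an illustration of the idea, not the Memphis implementation; the store list stands in for the file-based store with its 3-hour retention):&lt;/p&gt;

```python
import time

# An illustrative sketch of the flow above (not the Memphis implementation):
# count deliveries per (station, consumer group, message); once the
# max_msg_deliveries threshold is crossed, persist the payload, mark the
# message as poisoned, and raise an alert.
class PoisonTracker:
    def __init__(self, max_msg_deliveries=2):
        self.max_msg_deliveries = max_msg_deliveries
        self.counts = {}
        self.store = []     # stands in for the file-based store (3-hour retention)
        self.alerts = []

    def on_delivery(self, station, consumer_group, msg_id, payload):
        key = (station, consumer_group, msg_id)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] == self.max_msg_deliveries + 1:
            self.store.append({"time_sent": time.time(), "payload": payload})
            self.alerts.append(f"poison message {msg_id} in {station}/{consumer_group}")
            return "poisoned"
        return "redelivered"

tracker = PoisonTracker(max_msg_deliveries=2)
for _ in range(3):
    status = tracker.on_delivery("sensors", "cg-analytics", "m1", b"payload")
print(status, tracker.alerts)
```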

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi7bi7hwgcr7s51m3q1z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi7bi7hwgcr7s51m3q1z.jpeg" alt="notification"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcce1z64komeonutpm9wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcce1z64komeonutpm9wx.png" alt="notification-2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3 - &lt;strong&gt;Identify the consumer group which didn't acknowledge the message&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of going around through logs and multiple consumer groups, we wanted to narrow the finding to the minimum, so a simple click on the “real-time tracing” will lead to a graph screen showing the CGs which passed the redelivery threshold.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpvtgw2r9xmh8fxeqyic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpvtgw2r9xmh8fxeqyic.png" alt="Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4 - &lt;strong&gt;Fix and Debug&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the developer understands what went wrong and creates a fix, and before pushing code that will probably need more adjustments once new messages arrive, the “Resend” mechanism we created will push the unacknowledged message as many times as needed (until ACK) to the consumer group that was not able to acknowledge the message in the first place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthbadh83g7n0058vhl10.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthbadh83g7n0058vhl10.jpeg" alt="Resend"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The unacknowledged message will be retrieved from Memphis internal DB, ingested into a dedicated station per station, per CG, and only upon request. Next, it will be pushed to the unacknowledged CG – WITHOUT ANY CODE CHANGE, using the same already-configured emit.&lt;/p&gt;
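&lt;p&gt;The resend loop itself can be sketched in a few lines (a pure-Python illustration of the idea, not the Memphis implementation): replay the stored messages to the same consumer group until each one is acknowledged.&lt;/p&gt;

```python
# A pure-Python sketch of the Resend idea (illustration only, not the Memphis
# implementation): replay the stored unacknowledged messages to the same
# consumer group until every one of them is acknowledged.
def resend(stored_messages, consume, max_rounds=10):
    pending = list(stored_messages)
    for _ in range(max_rounds):
        if not pending:
            return True                   # everything was finally acked
        # `consume` returns True when the consumer group acks the message
        pending = [msg for msg in pending if not consume(msg)]
    return False                          # gave up after max_rounds replays

attempts = {"m1": 0}

def fixed_consumer(msg):
    attempts[msg] += 1
    return attempts[msg] == 3             # the fix works on the third replay

print(resend(["m1"], fixed_consumer))
```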

&lt;p&gt;That’s it. No need to create a persistency process, cache DB, DLQ, or messy logic. You can be sure no message is lost.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://memphis.dev/newsletter" rel="noopener noreferrer"&gt;Join 4500+ others and sign up for our data engineering newsletter&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis" rel="noopener noreferrer"&gt;Github&lt;/a&gt; • &lt;a href="https://docs.memphis.dev/memphis/getting-started/readme" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; • &lt;a href="https://discord.com/invite/DfWFT7fzUu" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;This article is written by &lt;a href="https://twitter.com/AvrahamNeeman" rel="noopener noreferrer"&gt;Avraham Neeman&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>product</category>
      <category>tooling</category>
      <category>github</category>
    </item>
    <item>
      <title>Here is why you need a message broker</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Thu, 07 Jul 2022 07:03:23 +0000</pubDate>
      <link>https://forem.com/memphis_dev/here-is-why-you-need-a-message-broker-31g0</link>
      <guid>https://forem.com/memphis_dev/here-is-why-you-need-a-message-broker-31g0</guid>
      <description>&lt;p&gt;Among the open-source projects my college buddies (and my future co-founders of memphis.dev) and I built, you can find “Makhela”, a Hebrew word for choir.&lt;br&gt;
For the sake of simplicity – We will use “Choir”.&lt;/p&gt;

&lt;p&gt;“Choir” was an open-source OSINT (open-source intelligence) project, written in Python, focused on gathering context-based connections between social profiles using AI models like LDA and topic modeling. Its goal was to explain what the world, and the high-ranking influencers in a specific domain, discuss in that domain, with a focus on what’s going on at the margins. For the proof-of-concept / MVP we used a single, fairly easy-to-integrate data source – Twitter.&lt;/p&gt;

&lt;p&gt;The graph below was the “brain” behind “Choir”. The brain autonomously grows and analyzes new vertices and edges based on incremental changes in the corpus and freshly ingested data.&lt;/p&gt;

&lt;p&gt;Each vertex symbolizes a profile (a persona), and each edge emphasizes (a) who connects to whom; (b) similar color = similar topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdeb1ztrigjsqgp37z469.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdeb1ztrigjsqgp37z469.png" alt="Makhela graph"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purple&lt;/strong&gt; = Topic 1&lt;br&gt;
&lt;strong&gt;Blue&lt;/strong&gt; = Topic 2&lt;br&gt;
&lt;strong&gt;Yellow&lt;/strong&gt; = Marginal topic&lt;/p&gt;

&lt;p&gt;After a reasonable amount of research, dev time, and a lot of troubleshooting &amp;amp; debugging, things started to look good.&lt;/p&gt;

&lt;p&gt;Among the issues we needed to solve were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand the connection between profiles&lt;/li&gt;
&lt;li&gt;Build a ranking algorithm for adding more influencers&lt;/li&gt;
&lt;li&gt;Transform the schema of incoming data to a shape the analysis side knows how to handle&lt;/li&gt;
&lt;li&gt;Near real-time is crucial - Enrich each tweet with external data&lt;/li&gt;
&lt;li&gt;Adaptivity to "Twitter" rate limit&lt;/li&gt;
&lt;li&gt;Each upstream or schema change crashed the analysis functions&lt;/li&gt;
&lt;li&gt;Sync between collection and analysis, which were two different components&lt;/li&gt;
&lt;li&gt;Infrastructure&lt;/li&gt;
&lt;li&gt;Scale&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;As with any startup or early-stage project, we built “Choir” as an MVP, working solely with Twitter, and it looked like this –&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qoj01i3mzlsmc64qims.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qoj01i3mzlsmc64qims.jpeg" alt="Makhela Architecture 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8qmipva27helhozip0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8qmipva27helhozip0.jpeg" alt="Makhela collector"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The “Collector” is a monolithic, Python-written application that collects and refines the data for analysis and visualization, in batches, at a static interval of every couple of hours.&lt;/p&gt;

&lt;p&gt;However, as the collected data and its complexity grew, problems started to arise. Each batch-processing cycle took hours, which the volume of collected data (hundreds of megabytes at most!) could not justify. More on the rest of the challenges in the next sections.&lt;/p&gt;

&lt;p&gt;Fast forward a few months later, users started to use “Choir”!!!&lt;br&gt;
Not just using, but engaging, paying, and raising feature requests.&lt;br&gt;
Any creator’s dream!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But then it hit us.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;(a) “Twitter” is not the center of the universe, and we need to expand “Choir” to more sources.&lt;/p&gt;

&lt;p&gt;(b) Any minor change in the code breaks the entire pipeline.&lt;/p&gt;

&lt;p&gt;(c) Monolith is a death sentence to a data-driven app performance-wise.&lt;/p&gt;

&lt;p&gt;As with every eager-to-run project that starts to get good traction, fueling that growth and user base is your number 1, 2, and 3 priority,&lt;/p&gt;

&lt;p&gt;and the last thing you want to do at this point is to go back and rebuild your framework. You want to continue the momentum.&lt;/p&gt;

&lt;p&gt;With that spirit in mind, we said, “Let’s add more data sources and refactor in the future”. A big mistake indeed.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Challenges in scaling a data-driven application&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Each new data source requires a different schema transformation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each schema change causes a chain of reaction downstream to the rest of the stages in the pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incremental / climbing collection. While you can wait for an entire batch collection to finish and then save it to the DB, applications often crash. Imagine you’re doing a very slow collection and, on the very last record, the collection process crashes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a monolith architecture, it’s hard to scale out the specific functions which require more power&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analysis functions often require modifications, upgrades, and new algorithms to get better results, often by using or requiring different keys from the collectors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While there is no quick fix, what we can do is build a framework to support such requirements.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1&lt;/strong&gt; – Duplicate the entire existing process to another source, for example, “Facebook”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzq8in54764xi45828k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynzq8in54764xi45828k.jpeg" alt="Architecture alternative"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to duplicating the collector, we needed to –&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain two different schemas (Nightmare)&lt;/li&gt;
&lt;li&gt;Entirely different analysis functions. The connections between profiles on Facebook and “Twitter” are different and require different objective relationships.&lt;/li&gt;
&lt;li&gt;The analyzer should be able to analyze the data in a joined manner, not individually; therefore, any minor change in source X directly affects the analyzer and often crashes it down.&lt;/li&gt;
&lt;li&gt;Double maintenance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the list goes on…&lt;br&gt;
As a result, it can’t scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2&lt;/strong&gt; - Here it comes. Using a message broker!&lt;/p&gt;

&lt;p&gt;I want to draw a baseline. A &lt;a href="https://memphis.dev/blog/grpc-vs-message-broker/" rel="noopener noreferrer"&gt;message broker&lt;/a&gt; is not the solution but a supporting framework or a tool to enable branched, growing data-driven architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a message broker?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“A message broker is an architectural pattern for message validation, transformation, and routing. It mediates communication among applications, minimizing the mutual awareness that applications should have of each other in order to be able to exchange messages, effectively implementing decoupling.” – Wikipedia.&lt;/p&gt;

&lt;p&gt;Firstly, let’s translate it to something we can grasp better.&lt;/p&gt;

&lt;p&gt;A message broker is a temporary data store. Why temporary? Because each piece of data within it will be removed after a certain time, defined by the user. The pieces of data within the message broker are called “messages.” Each message usually weighs a few bytes to a few megabytes. &lt;/p&gt;

&lt;p&gt;Around the message broker, we can find producers and consumers.&lt;/p&gt;

&lt;p&gt;Producer = The “thing” that pushes the messages into the message broker.&lt;br&gt;
Consumer = The “thing” that consumes the messages from the message broker.&lt;/p&gt;

&lt;p&gt;“Thing” means system/service/application/IoT/some objective that connects with the message broker and exchanges data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Small note:&lt;/em&gt; the same service/system/app can act as both a producer and a consumer at the same time.&lt;/p&gt;

&lt;p&gt;Messaging queues derive from the same family, but there is a crucial difference between a broker and a queue.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An MQ uses the publish/subscribe model: the MQ itself pushes the data to the consumers, rather than the other way around (a consumer pulling data from the broker).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ordering is guaranteed. Messages will be pushed in the order they were received. Some systems require this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The ratio between a publisher (producer) and subscribers is 1:1. Having said that, modern versions can go beyond this with features like exchanges and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Famous message brokers/queues are Apache Kafka, RabbitMQ, Apache Pulsar, and our own Memphis.dev. &lt;a href="https://memphis.dev/blog/apache-kafka-use-cases-when-to-use-it-when-not-to/" rel="noopener noreferrer"&gt;Kafka use cases&lt;/a&gt; span from event streaming to real-time data processing. One might consider using Memphis.dev instead of Kafka due to the ease of deployment and developer friendliness it provides.&lt;/p&gt;

&lt;p&gt;Still with me? Awesome!&lt;/p&gt;

&lt;p&gt;Thus, let’s understand how using a message broker helped “Choir” to scale.&lt;/p&gt;




&lt;p&gt;Instead of doing things like this -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3501zufre1ob1wlozki0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3501zufre1ob1wlozki0.jpeg" alt="Makhela Architecture 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By decoupling the app into smaller microservices and orchestrating the flow using a message broker, it turned into this – &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgra0u1abhpg9xmaazko.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpgra0u1abhpg9xmaazko.jpeg" alt="Message broker architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting from the top-left corner, each piece of data (tweet/post) inserted into the system automatically triggers the entire process and flows between the different stages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collection&lt;/strong&gt;. The three collectors search each new profile added to the community in parallel. If another data source / social network is needed, it is developed on the side and, once ready, starts listening for incoming events. This allows an infinite scale of sources, the ability to work on a specific source without disrupting the others, micro-scaling for better performance of each source individually, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transformation&lt;/strong&gt;. Once the collection is complete, results will be pushed to the next stage, “Schema transformation,” where the schema transformation service will transform the events’ schemas into a shape the analysis function can interpret. It enables a “single source of truth” for schema management, so in case of an upstream change, all that is needed is to reach out to this one service and debug the issue. In a more robust design, it can also integrate with an external schema registry to make maintenance even more effortless.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdbgguhpiovlv4f8yqwv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdbgguhpiovlv4f8yqwv.jpeg" alt="Schema change"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Analysis&lt;/strong&gt;. Each event is sent to the analysis function already transformed, in a shape the analysis function can interpret. In “Choir” we used different AI models. Scaling them was impossible before, so moving to analysis per event definitely helped.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Save&lt;/strong&gt;. Creates an abstraction between “Choir” and the type of database and the ability to batch several insertions to a single batch instead of request per event.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
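&lt;p&gt;The four stages above can be sketched as workers that only share queues, so any stage can be replaced or scaled without touching the others. This is a toy in-process illustration: in a real deployment the queues would be broker stations/topics and the stages separate microservices, and the field and topic names are made up.&lt;/p&gt;

```python
import queue

# A toy, in-process illustration of the decoupled pipeline above: each stage
# only knows its input and output queues, never the other stages.
def run_stage(inbox, outbox, work):
    while True:
        item = inbox.get()
        if item is None:                  # sentinel: shut the stage down
            if outbox is not None:
                outbox.put(None)
            return
        result = work(item)
        if outbox is not None:
            outbox.put(result)

def transform(tweet):                     # "schema transformation" stage
    return {"author": tweet["user"], "text": tweet["body"]}

def analyze(event):                       # "analysis" stage
    event["topic"] = "topic-1" if "data" in event["text"] else "marginal"
    return event

raw, transformed, analyzed = queue.Queue(), queue.Queue(), queue.Queue()
raw.put({"user": "a", "body": "data pipelines"})
raw.put(None)

saved = []                                # "save" stage sink
run_stage(raw, transformed, transform)
run_stage(transformed, analyzed, analyze)
run_stage(analyzed, None, saved.append)
print(saved)
```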




&lt;p&gt;The main reason behind my writing is to emphasize the importance of implementing a message broker pattern and technology as early as possible to avoid painful refactoring in the future. Message brokers, by default, enable you to build &lt;a href="https://memphis.dev/blog/building-a-scalable-search-architecture/" rel="noopener noreferrer"&gt;scalable architectures&lt;/a&gt; because they remove the tight coupling constraints.&lt;/p&gt;

&lt;p&gt;Yes, your roadmap and added features are important; yes, there will be a learning curve; yes, it might look like an overkill solution for your stage. But in a data-driven use case, the need for scale will quickly reveal itself in performance, agility, feature additions, modifications, and more. Bad design decisions or the lack of a proper framework will burn out your resources. It is better to build agile foundations, not necessarily enterprise-grade, before reaching the phase where you are overwhelmed by users and feature requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To conclude, the entry barrier for a message broker is definitely worth your time.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Special thanks to &lt;a href="https://twitter.com/Yanivbh1" rel="noopener noreferrer"&gt;Yaniv Ben-Hemo&lt;/a&gt; for the writing&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://memphis.dev/newsletter" rel="noopener noreferrer"&gt;Join 4500+ others and sign up for our data engineering newsletter&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis" rel="noopener noreferrer"&gt;Github&lt;/a&gt; • &lt;a href="https://docs.memphis.dev/memphis/getting-started/readme" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; • &lt;a href="https://discord.com/invite/DfWFT7fzUu" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>architecture</category>
      <category>opensource</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>How to contribute to an open-source project</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Tue, 05 Jul 2022 08:38:20 +0000</pubDate>
      <link>https://forem.com/memphis_dev/how-to-contribute-to-an-open-source-project-2m1c</link>
      <guid>https://forem.com/memphis_dev/how-to-contribute-to-an-open-source-project-2m1c</guid>
      <description>&lt;p&gt;One of the During my career as a software developer, I started getting involved in some open-source communities and actively contributor, I never thought to myself that it will leverage my knowledge and experience to that level it did.&lt;/p&gt;

&lt;p&gt;Hence, in the spirit of open-source, I co-founded Memphis.dev together with my 3 best friends from college – A &lt;a href="https://memphis.dev/blog/grpc-vs-message-broker/"&gt;message broker&lt;/a&gt; for developers made out of devs’ struggles with using message brokers, building complex data/event-driven apps, and troubleshooting them.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is an open-source software/project?&lt;/strong&gt;&lt;br&gt;
Open-source software (OSS) is software whose source code, in some shape, is open to the public, making it available for use, modification, and distribution with its original rights. Programmers who have access to the source code can therefore change it: adding features, modifying it, or fixing parts that aren’t working properly. OSS typically includes a license (Apache, BSD, MIT, GNU) that describes the constraints around the project and how “flexible” it is.&lt;br&gt;
Read more about it in &lt;a href="https://snyk.io/learn/open-source-licenses/"&gt;Snyk's article&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Where to find interesting open-source projects to contribute?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OSS contributors usually start by contributing to projects they already make use of. For example, a developer who works with Redis finds it interesting to go deep into Redis internals, understand what’s going on under the hood, fix bugs, or add new features.&lt;/p&gt;

&lt;p&gt;Specifically for developers without any former experience working with open-source products, my personal suggestion would be to go over the CNCF projects page. CNCF is the foundation behind many cloud-native, open-source projects; among the projects it backs you can find Kubernetes, Prometheus, and much more. Undoubtedly, it is a really good place to find some interesting projects to start contributing to.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The contribution process&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find a nice project, for example Redis, NATS, or Memphis{dev}&lt;/li&gt;
&lt;li&gt;Connect with the project’s community (Slack channels, Discord, website, GitHub repo, etc.)&lt;/li&gt;
&lt;li&gt;Search for contribution guidelines. Often a file located within the project’s main repo called &lt;a href="https://github.com/memphisdev/memphis/blob/master/CONTRIBUTING.md"&gt;CONTRIBUTING.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Fork the GitHub repository — Create a copy of the entire repo on your GitHub account&lt;/li&gt;
&lt;li&gt;Create a separate branch from the main branch&lt;/li&gt;
&lt;li&gt;Code your changes (bug fixes, new features, etc.)&lt;/li&gt;
&lt;li&gt;Push your branch&lt;/li&gt;
&lt;li&gt;Create a pull request of your branch to the upstream repo&lt;/li&gt;
&lt;li&gt;One of the repo maintainers reviews your PR (this is usually triggered automatically)&lt;/li&gt;
&lt;li&gt;Fix issues and comments left by the maintainer&lt;/li&gt;
&lt;li&gt;Wait for your code to be merged&lt;/li&gt;
&lt;li&gt;Celebrate your first contribution with some cold beer 🙂&lt;/li&gt;
&lt;/ol&gt;
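&lt;p&gt;The Git side of steps 4–8 can be walked through locally. This is a minimal sketch that uses a local bare repository as a stand-in for the project’s GitHub repo; the paths, branch name, and file here are illustrative only:&lt;/p&gt;

```shell
# Local walk-through of the fork -> branch -> push flow (no network needed).
# "upstream.git" stands in for the project's GitHub repository.
set -e
rm -rf /tmp/oss-demo && mkdir -p /tmp/oss-demo && cd /tmp/oss-demo

git init --bare upstream.git              # the project's repository
git clone upstream.git fork && cd fork    # step 4: your fork (a full copy)
git config user.email "dev@example.com" && git config user.name "dev"

git checkout -b fix-typo                  # step 5: a separate branch
echo "fixed" > README.md                  # step 6: code your changes
git add README.md && git commit -m "docs: fix typo"
git push origin fix-typo                  # step 7: push your branch
# step 8: open a pull request from fix-typo against the upstream repo on GitHub
```

&lt;p&gt;From here, steps 9–11 (review, fixes, merge) happen on GitHub itself through the pull request.&lt;/p&gt;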




&lt;p&gt;&lt;a href="https://memphis.dev/newsletter"&gt;Join 4500+ others and sign up for our data engineering newsletter&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Follow Us to get the latest updates!&lt;br&gt;
&lt;a href="https://github.com/memphisdev/memphis"&gt;Github&lt;/a&gt; • &lt;a href="https://docs.memphis.dev/memphis/getting-started/readme"&gt;Docs&lt;/a&gt; • &lt;a href="https://discord.com/invite/DfWFT7fzUu"&gt;Discord&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Originally posted on &lt;a href="https://memphis.dev/blog/how-to-contribute-to-an-open-source-project/"&gt;Memphis{dev} blog&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Special thanks to &lt;a href="https://twitter.com/IdanAsulin1"&gt;Idan Asulin&lt;/a&gt; for this amazing article!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>contributorswanted</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Fix: invalid apiVersion “client.authentication.k8s.io/v1alpha1”</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 22 Jun 2022 08:31:39 +0000</pubDate>
      <link>https://forem.com/team_memphis/fix-invalid-apiversion-clientauthenticationk8siov1alpha1-1f1e</link>
      <guid>https://forem.com/team_memphis/fix-invalid-apiversion-clientauthenticationk8siov1alpha1-1f1e</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I0Gmhwfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://community.ops.io/remoteimages/uploads/articles/vaqbm5uz7awzjv4sa428.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I0Gmhwfy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://community.ops.io/remoteimages/uploads/articles/vaqbm5uz7awzjv4sa428.png" alt="aws + k8s" width="301" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently encountered this error while trying to run ‘kubectl’ commands against our AWS EKS cluster; I’m not sure whether other Kubernetes vendors are affected as well.&lt;/p&gt;

&lt;p&gt;In short, you are using an ‘AWS CLI’ version that is incompatible with your current AWS EKS version.&lt;/p&gt;

&lt;p&gt;To fix it, update your ‘AWS CLI’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installing or updating the latest version of the AWS CLI&lt;/strong&gt;&lt;br&gt;
This topic describes how to install or update the latest release of the AWS Command Line Interface (AWS CLI) on…&lt;br&gt;
&lt;a href="//docs.aws.amazon.com"&gt;docs.aws.amazon.com&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;For Linux x86 (64-bit)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" &amp;amp;&amp;amp; unzip awscliv2.zip &amp;amp;&amp;amp; sudo ./aws/install&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For MacOS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"&lt;br&gt;
sudo installer -pkg AWSCLIV2.pkg -target /&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Windows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;C:\&amp;gt; msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Hope it helped. Cheers.&lt;/p&gt;

&lt;p&gt;Thanks to Yaniv Ben-hemo for writing &amp;amp; Idan Asulin for the QA.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Apache Kafka &amp; Memphis.dev</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Tue, 31 May 2022 13:30:40 +0000</pubDate>
      <link>https://forem.com/memphis_dev/memphisdev-and-apache-kafka-3h67</link>
      <guid>https://forem.com/memphis_dev/memphisdev-and-apache-kafka-3h67</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;What is Memphis.dev, what is Apache Kafka, and what are the strengths and weaknesses of each framework&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BZCSFo-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbvzowdkofftp2byoyts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BZCSFo-j--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbvzowdkofftp2byoyts.png" alt="memphis vs kafka" width="800" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s start!&lt;/p&gt;

&lt;p&gt;The complete comparison table is at the bottom.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is &lt;a href="https://memphis.dev"&gt;Memphis.dev&lt;/a&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A distributed message broker designed and built to make life incredibly easier for developers who work on event-driven software, with event-level observability and troubleshooting tools. &lt;a href="https://memphis.dev"&gt;memphis.dev&lt;/a&gt; started as a fork of the &lt;a href="https://nats.io"&gt;nats.io&lt;/a&gt; project (around since 2011), is written in Go, and builds its own streaming layer on top.&lt;/p&gt;

&lt;p&gt;Memphis.dev uses the same produce-consume paradigm as Apache Kafka.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memphis.dev Important objects&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Broker&lt;/li&gt;
&lt;li&gt;Factories&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.memphis.dev/memphis-new/dashboard-ui/stations"&gt;Stations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Producers&lt;/li&gt;
&lt;li&gt;Consumers / Consumer groups&lt;/li&gt;
&lt;li&gt;UI&lt;/li&gt;
&lt;li&gt;CLI&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eYRVSb4_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj7y22q2vrgkgvu111en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eYRVSb4_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj7y22q2vrgkgvu111en.png" alt="Memphis.dev Architecture" width="790" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memphis.dev flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The robust version of Memphis is deployed over Kubernetes using a Helm chart.&lt;/p&gt;

&lt;p&gt;The Memphis deployment also includes the UI, which gives the user complete control over the entire Memphis cluster.&lt;/p&gt;

&lt;p&gt;Where other systems use terminology like topics and queues, Memphis has stations, which are grouped under factories.&lt;/p&gt;

&lt;p&gt;Apps produce (write) and consume (read) data to and from stations.&lt;/p&gt;

&lt;p&gt;To scale the performance and redundancy of consumers in a super easy manner, several consumers of the same kind can be grouped within the same consumer group.&lt;/p&gt;

&lt;p&gt;Messages remain in the station until they hit the defined retention limit (set at station creation), but each message is consumed exactly once by a given consumer group.&lt;/p&gt;
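&lt;p&gt;A sketch of those consumer-group semantics (an illustrative in-memory model, not Memphis code): every group receives every message, while the members inside one group share the messages between them, so each message is handled once per group.&lt;/p&gt;

```javascript
// Sketch of consumer-group semantics: within a group, each message goes to
// exactly one member (round-robin here); every group receives every message.
class MiniStation {
  constructor() { this.groups = new Map(); } // group name -> { members, next }
  join(group, member) {
    if (!this.groups.has(group)) this.groups.set(group, { members: [], next: 0 });
    this.groups.get(group).members.push(member);
  }
  produce(message) {
    for (const g of this.groups.values()) {
      g.members[g.next % g.members.length](message); // one member per group
      g.next++;
    }
  }
}

const station = new MiniStation();
station.join("emails", (m) => console.log("emails got", m));
station.join("analytics", (m) => console.log("analytics-1 got", m));
station.join("analytics", (m) => console.log("analytics-2 got", m));

station.produce("order#1"); // emails got order#1, analytics-1 got order#1
station.produce("order#2"); // emails got order#2, analytics-2 got order#2
```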

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XSdAr2oP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfjdpx4d9t320d4h0aht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XSdAr2oP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfjdpx4d9t320d4h0aht.png" alt="Memphis.dev message ordering" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistency (Where data is being saved)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory — Higher performance, higher cost, redundant for up to two failed brokers.&lt;/li&gt;
&lt;li&gt;File — Lower performance, lower cost, redundant for an entire cluster breakdown.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are replicated across the brokers as defined by the user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VXqPfxBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwzyjxgu3uoxm9n7cbw0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VXqPfxBw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kwzyjxgu3uoxm9n7cbw0.png" alt="Memphis.dev persistence mechanism" width="800" height="838"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redundancy / HA&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster-mode — Made by multiple brokers (min. 3). Data is replicated across brokers. Redundant across AZs.&lt;/li&gt;
&lt;li&gt;Standalone — Made by a single broker.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;What is Apache Kafka?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache Kafka is a distributed event store and stream-processing platform. It is an open-source system developed by the Apache Software Foundation, written in Java and Scala, and is well known as the second most popular event store in the world after RabbitMQ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apache Kafka Important players&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Consumer / Consumer groups&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Producer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kafka connect&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Topic and topic partition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kafka streams&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Broker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Zookeeper (will be removed soon)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Apache Kafka flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache Kafka is one of the most mature message brokers in the market.&lt;/p&gt;

&lt;p&gt;Producers send a message record to a topic. A topic is a category or feed name to which records are published. Consumers subscribe to a topic and pull messages from it.&lt;/p&gt;

&lt;p&gt;In Kafka, messages remain in the topic even after they are consumed (the time limit is defined by the retention policy).&lt;/p&gt;
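&lt;p&gt;Per-key ordering inside a topic comes from key-based partitioning: records with the same key always land in the same partition. Below is a simplified sketch of that idea; Kafka’s real default partitioner uses a murmur2 hash, and the toy hash here only illustrates the mechanism.&lt;/p&gt;

```javascript
// Simplified key-based partitioning: records with the same key always map
// to the same partition, so their relative order is preserved.
function partitionFor(key, numPartitions) {
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % numPartitions;
}

const partitions = Array.from({ length: 3 }, () => []);
const records = [
  { key: "user-1", value: "login" },
  { key: "user-2", value: "login" },
  { key: "user-1", value: "logout" },
];
for (const r of records) {
  partitions[partitionFor(r.key, partitions.length)].push(r.value);
}
// "user-1"'s login and logout end up in the same partition, in order
```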

&lt;p&gt;Kafka consumes memory heavily in exchange for performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PY5Hcywl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30wkpfi9q46a8gu9wvl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PY5Hcywl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30wkpfi9q46a8gu9wvl2.png" alt="apache kafka" width="602" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistency (Where data is being saved)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;File (log files)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Redundancy / HA&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kafka achieves this by using Zookeeper to manage the state of the cluster.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-833%3A+Mark+KRaft+as+Production+Ready"&gt;version 3.3&lt;/a&gt;, KRaft will be available for production use and remove the need for Zookeeper.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lf86y80B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lx626ww0rasr0jo9afms.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lf86y80B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lx626ww0rasr0jo9afms.jpeg" alt="kafka ecosystem" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Bottom line&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache Kafka has strong maturity and a large community, and it is battle-tested against large workloads and distributed environments, but it brings multiple technical challenges in ops, developer onboarding, troubleshooting, and cost.&lt;/p&gt;

&lt;p&gt;Memphis.dev has a much smaller (yet growing) community and less documentation, but it almost completely eliminates the need for operations and tuning, lets developers onboard autonomously, runs on any Kubernetes, and comes with observability and troubleshooting features that make it lovable for devs who are starting their way through the forests of event-driven and real-time systems.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Comparison Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ksg37o2D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1lod3d0avtjqls3pwpk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ksg37o2D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1lod3d0avtjqls3pwpk.png" alt="comperison" width="800" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KZ-bNOo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wlnogllt73tn8iewdt5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KZ-bNOo7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wlnogllt73tn8iewdt5.jpeg" alt="memphis vs kafka" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to build your own “DoorDash” app</title>
      <dc:creator>Memphis.dev team</dc:creator>
      <pubDate>Wed, 25 May 2022 07:33:18 +0000</pubDate>
      <link>https://forem.com/memphis_dev/how-to-build-your-own-doordash-app-1n8o</link>
      <guid>https://forem.com/memphis_dev/how-to-build-your-own-doordash-app-1n8o</guid>
      <description>&lt;p&gt;Apractical, (relatively) easy-to-do tutorial for developing an event-driven, distributed food delivery app, just like “Uber Eats” or “Wolt”.&lt;/p&gt;

&lt;p&gt;Many thanks to Dhanush Kamath for the use case and supporting article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meet Fastmart — The fastest and most reliable food delivery app ever built.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uVMCvwLb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1r49b3ovonnckrqjgd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uVMCvwLb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1r49b3ovonnckrqjgd3.png" alt="Fastmart" width="700" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The technology stack we will use -&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tL3t3ajS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o25zj5x3drt9ss2gec3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tL3t3ajS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o25zj5x3drt9ss2gec3r.png" alt="Technologies" width="700" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js as our primary dev-lang&lt;/li&gt;
&lt;li&gt;MongoDB for order persistency&lt;/li&gt;
&lt;li&gt;Memphis, a message broker for developers, as our broker&lt;/li&gt;
&lt;li&gt;Kubernetes to host our microservices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EUr4dEpj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wdegrz0vfk7k5wp6qr8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EUr4dEpj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wdegrz0vfk7k5wp6qr8t.png" alt="Event-driven architecture using Message Broker" width="454" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Level Plan&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Minikube using brew package manager.&lt;/li&gt;
&lt;li&gt;Install Memphis over Minikube.&lt;/li&gt;
&lt;li&gt;Clone “Fastmart” GitHub repo.&lt;/li&gt;
&lt;li&gt;Review the code, the different services, and how they interact with each other.&lt;/li&gt;
&lt;li&gt;Deploy “Fastmart” over Kubernetes.&lt;/li&gt;
&lt;li&gt;Order food!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Let’s start!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Install Minikube&lt;/strong&gt;&lt;br&gt;
For the installation commands, please head here: &lt;a href="https://minikube.sigs.k8s.io/docs/start/"&gt;https://minikube.sigs.k8s.io/docs/start/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.&lt;/p&gt;

&lt;p&gt;All you need is Docker (or similarly compatible) container or a Virtual Machine environment, and Kubernetes is a single command away: minikube start&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you’ll need&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 CPUs or more&lt;/li&gt;
&lt;li&gt;2GB of free memory&lt;/li&gt;
&lt;li&gt;20GB of free disk space&lt;/li&gt;
&lt;li&gt;Internet connection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Container or virtual machine manager, such as Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware Fusion/Workstation&lt;/p&gt;

&lt;p&gt;Output -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BM-2saQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k69y3k4ai6muqmurxnxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BM-2saQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k69y3k4ai6muqmurxnxl.png" alt="output" width="700" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify minikube is healthy -&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get ns&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output —&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NAME STATUS AGE&lt;br&gt;
default Active 31h&lt;br&gt;
kube-node-lease Active 31h&lt;br&gt;
kube-public Active 31h&lt;br&gt;
kube-system Active 31h&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Install Memphis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;helm repo add memphis https://k8s.memphis.dev/charts/ &amp;amp;&amp;amp; helm install memphis memphis/memphis --set connectionToken="memphis" --create-namespace --namespace memphis&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s wait a minute or two, allowing the different components to reach “Running” state.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n memphis&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                           READY   STATUS             RESTARTS      AGE
k8s-busybox-68867bb9b7-sqdql   0/1     CrashLoopBackOff   4 (68s ago)   3m13s
memphis-broker-0               3/3     Running            4 (55s ago)   3m13s
memphis-ui-fd54f5bd6-zzqd4     0/1     CrashLoopBackOff   4 (79s ago)   3m13s
mongodb-replica-0              1/1     Running            0             3m13s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                           READY   STATUS             RESTARTS      AGE
k8s-busybox-68867bb9b7-sqdql   0/1     CrashLoopBackOff   4 (76s ago)   3m21s
memphis-broker-0               3/3     Running            4 (63s ago)   3m21s
memphis-ui-fd54f5bd6-zzqd4     1/1     Running            5 (87s ago)   3m21s
mongodb-replica-0              1/1     Running            0             3m21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;k8s-busybox can be ignored&lt;/strong&gt;. It will be fixed in an upcoming version.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Clone the Fastmart repo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/yanivbh1/FastMart.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;System Architecture, Code, and Flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NGrrqD8Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j61f648l6qfyzj0ny362.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NGrrqD8Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j61f648l6qfyzj0ny362.png" alt="architecture" width="700" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the numbers to understand the flow.&lt;/p&gt;

&lt;p&gt;FastMart has three main components:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;order-service&lt;/code&gt; - Exposes REST endpoints that allow clients to fetch the food menu, place an order and track the order in real-time.&lt;/p&gt;

&lt;p&gt;A new order will be saved in MongoDB with the status “Pending” and produced (pushed) into the “orders” station.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET: /&amp;lt;orderId&amp;gt;&lt;br&gt;
Example: curl http://order-service:3000/30&lt;br&gt;
POST: /&amp;lt;order_details&amp;gt;&lt;br&gt;
Example: curl -X POST http://order-service:3000/api/orders -d '{"items":[{"name":"burger","quantity":1}], "email":"yaniv@memphis.dev"}' -H 'Content-Type: application/json'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UNiP3sdZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n13o4lw5bo620iofjxmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UNiP3sdZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n13o4lw5bo620iofjxmq.png" alt="example" width="660" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code responsible for communicating with Memphis can be found at -&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./order-service/src/services/mqService.js&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const memphis = require("memphis-dev");
const { logger } = require('./loggerService')
const MEMPHIS_HOST = process.env.MEMPHIS_HOST || 'localhost'; // create MQ connection string using environment variable
const MEMPHIS_USERNAME = process.env.MEMPHIS_USERNAME;
const MEMPHIS_TOKEN = process.env.MEMPHIS_TOKEN;
let ordersStation_producer = null;

const memphisConnect = async () =&amp;gt; {
  try {
    logger.info(`Memphis - trying to connect`)
    await memphis.connect({
      host: MEMPHIS_HOST,
      username: MEMPHIS_USERNAME,
      connectionToken: MEMPHIS_TOKEN
    });
    logger.info(`Memphis - connection established`)
    ordersStation_producer = await memphis.producer({
      stationName: "orders",
      producerName: "order_service",
    });
    logger.info(`ordersStation_producer created`)
  } catch (ex) {
    logger.log('fatal', `Memphis - ${ex}`);
    memphis.close();
    process.exit();
  }
}

/**
 * Publish order to station
 * @param {Object} order - order object containing order details
 */
const publishOrderToStation = (order) =&amp;gt; {
  ordersStation_producer.produce({ message: Buffer.from(JSON.stringify(order)) });
  logger.info(`Memphis - order ${order._id} placed`);
}

/**
 * An express middleware for injecting queue services into the request object.
 * @param {Object} req - express request object.
 * @param {Object} res - express response object.
 * @param {Function} next - express next() function.
 */
const injectPublishService = (req, res, next) =&amp;gt; {
  // add all exchange operations here
  const stationServices = {
    publishOrderToStation: publishOrderToStation
  }
  // inject exchangeServices in request object
  req.stationServices = stationServices;
  next();
}

module.exports = {
  injectPublishService: injectPublishService,
  memphisConnect: memphisConnect,
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;email-service&lt;/code&gt; - Responsible for notifying the client of the different stages.&lt;/p&gt;

&lt;p&gt;email-service consumes messages from two stations: &lt;code&gt;orders&lt;/code&gt; and &lt;code&gt;notifications&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As soon as an order is inserted into the station, the email service notifies the client with an order confirmation.&lt;/p&gt;

&lt;p&gt;At the same time, it listens for new notification requests from other services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P_d1V5pG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9foqsadabgwr58q16daj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P_d1V5pG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9foqsadabgwr58q16daj.png" alt="stations" width="548" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resturant-service&lt;/code&gt; - Responsible for fulfilling an order.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consume an order&lt;/li&gt;
&lt;li&gt;Process the order&lt;/li&gt;
&lt;li&gt;Change order status at the MongoDB level to “Accepted”&lt;/li&gt;
&lt;li&gt;Using constant sleep time to mimic the preparation of the food by the restaurant&lt;/li&gt;
&lt;li&gt;Change order status at the MongoDB level to “Delivered”&lt;/li&gt;
&lt;li&gt;Sending notification to the client&lt;/li&gt;
&lt;/ol&gt;
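&lt;p&gt;The six steps above can be sketched as a single handler. This is a simplified model with in-memory stand-ins for MongoDB and the notifications station; the names are illustrative, not the repo’s actual code:&lt;/p&gt;

```javascript
// Sketch of the restaurant-service fulfillment loop described above
// (in-memory stand-ins for MongoDB and the notifications station).
const ordersDb = new Map();   // stand-in for MongoDB
const notifications = [];     // stand-in for the notifications station
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fulfillOrder(order) {                          // steps 1-2: consume and process
  ordersDb.set(order._id, { ...order, status: "Accepted" });  // step 3: mark "Accepted"
  await sleep(100);                                           // step 4: constant sleep mimics food preparation
  ordersDb.set(order._id, { ...order, status: "Delivered" }); // step 5: mark "Delivered"
  notifications.push({ orderId: order._id, email: order.email }); // step 6: notify the client
}

fulfillOrder({ _id: 30, email: "customer@example.com", items: ["burger"] })
  .then(() => console.log(ordersDb.get(30).status)); // → Delivered
```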

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mVbNnxts--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9886kqoqf7z8echp8u1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mVbNnxts--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9886kqoqf7z8echp8u1c.png" alt="order" width="542" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. &lt;strong&gt;Deploy “Fastmart” over Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fastmart repo tree -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oXaPoIMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aujn0oezlcw0z1ufrvqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oXaPoIMA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aujn0oezlcw0z1ufrvqg.png" alt="Fastmart files tree" width="294" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deploy the Fastmart namespace and its different services, run &lt;code&gt;kubectl apply -f k8s-deployment.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n fastmart&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;READY   STATUS             RESTARTS   AGE
email-service-5ddb9b58d6-bq2xd       0/1     CrashLoopBackOff   3          103s
fastmart-ui-5c9bc497bd-kn4lk         1/1     Running            0          11m
orders-service-5b689b66-4h8t9        0/1     CrashLoopBackOff   7          11m
resturant-service-6d97cf6fdc-c9mvs   0/1     Completed          4          103s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s understand why the Fastmart services can’t start.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl logs email-service-5ddb9b58d6-bq2xd -n fastmart&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; email-service@1.0.0 start
&amp;gt; node ./index.js

17-05-2022 07:10:09 PM - info: Sleeping for 300ms before connecting to Memphis.
17-05-2022 07:10:09 PM - info: Memphis - trying to connect
17-05-2022 07:10:09 PM - info: email-service started
17-05-2022 07:10:09 PM - fatal: Memphis - User is not exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It appears that the services try to connect to “Memphis” with the user “fastmart”, which does not exist, so we need to create it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The simplest way to add a new user would be through the UI&lt;/strong&gt;, but let’s do it via CLI.&lt;/p&gt;

&lt;p&gt;Please install the Memphis CLI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ mem&lt;br&gt;
Usage: index &amp;lt;command&amp;gt; [options]&lt;br&gt;
Options:&lt;br&gt;
 -V, --version output the version number&lt;br&gt;
 -h, --help display help for command&lt;br&gt;
Commands:&lt;br&gt;
 connect Connection to Memphis&lt;br&gt;
 factory Factories usage commands&lt;br&gt;
 station Stations usage commands&lt;br&gt;
 user Users usage commands&lt;br&gt;
 producer Producers usage commands&lt;br&gt;
 consumer Consumer usage commands&lt;br&gt;
 init Creates an example project for working with Memphis&lt;br&gt;
 help display help for command&lt;br&gt;
Factory is the place to bind stations that have some close business logic&lt;br&gt;
Factory Commands:&lt;br&gt;
 ls List of factories&lt;br&gt;
 create Create new factory&lt;br&gt;
 edit Edit factory name and/or description&lt;br&gt;
 del Delete a factory&lt;br&gt;
Station is Memphis’ queue/topic/channel/subject&lt;br&gt;
Station Commands:&lt;br&gt;
 ls List of stations&lt;br&gt;
 create Create new station&lt;br&gt;
 info Specific station’s info&lt;br&gt;
 del Delete a station&lt;br&gt;
Manage users and permissions&lt;br&gt;
User Commands:&lt;br&gt;
 ls List of users&lt;br&gt;
 add Add new user&lt;br&gt;
 del Delete user&lt;br&gt;
Producer is the entity who can send messages into stations&lt;br&gt;
Producer Commands:&lt;br&gt;
 ls List of Producers&lt;br&gt;
Consumer is the entity who can consume messages from stations&lt;br&gt;
Consumer Commands:&lt;br&gt;
 ls List of Consumers&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To connect the CLI to the Memphis control plane, we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;root password&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get secret memphis-creds -n memphis -o jsonpath="{.data.ROOT_PASSWORD}" | base64 --decode&lt;br&gt;
OqEO9AbncKFF93r9Qd5V&lt;/code&gt;&lt;/p&gt;
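For context, Kubernetes stores Secret values base64-encoded, which is why the command pipes the `jsonpath` output through `base64 --decode`. A minimal Python sketch of that last decode step (the value here mirrors the sample output above and is purely illustrative):

```python
import base64

# Kubernetes returns Secret data base64-encoded; `base64 --decode` reverses that.
# Encode the sample password first so the round trip is self-contained:
encoded = base64.b64encode(b"OqEO9AbncKFF93r9Qd5V").decode()

# This line is the Python equivalent of `... | base64 --decode`:
password = base64.b64decode(encoded).decode()
print(password)
```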

&lt;ul&gt;
&lt;li&gt;Memphis control-plane URL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl port-forward service/memphis-cluster 7766:7766 6666:6666 5555:5555 --namespace memphis &amp;gt; /dev/null &amp;amp;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, connect the CLI&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mem connect --user root --password bpdASQlhwWNzFt4JwLQo --server localhost:5555&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yQeFpUe3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4hvvuzphch7en6rddjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yQeFpUe3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4hvvuzphch7en6rddjk.png" alt="cli" width="700" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the user “fastmart”&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mem user add -u fastmart --type application&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or via the UI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2FwZ528z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5nhnfcqw2twe2qi8v07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2FwZ528z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z5nhnfcqw2twe2qi8v07.png" alt="ui" width="700" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RMKoXBTx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tojpd6nsr1q3cqx0xi18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RMKoXBTx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tojpd6nsr1q3cqx0xi18.png" alt="details" width="700" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Soon after we create the user, the pods will restart automatically and reconnect to Memphis.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8LtGpik_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfod2y9aiks4z0mizoa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8LtGpik_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfod2y9aiks4z0mizoa0.png" alt="List of factories encapsulate stations" width="700" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z9Ca50Bh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hppff45vi4ln64rjnbst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z9Ca50Bh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hppff45vi4ln64rjnbst.png" alt="Station overview" width="700" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3l1dQu3Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p4sdkznru9pwb3414990.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3l1dQu3Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p4sdkznru9pwb3414990.png" alt="overview" width="700" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6. &lt;strong&gt;Order food!&lt;/strong&gt;&lt;br&gt;
To expose the &lt;code&gt;orders-service&lt;/code&gt; through localhost, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl port-forward service/orders 9001:80 --namespace fastmart &amp;gt; /dev/null &amp;amp;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Get the menu&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl localhost:9001/api/menu&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{"items":[{"name":"burger","price":50},{"name":"fries","price":20},{"name":"coke","price":10}]}&lt;/code&gt;&lt;/p&gt;
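As a quick sanity check, a client could price an order against this menu. A minimal Python sketch, assuming only the JSON shape shown above (the helper `order_total` is hypothetical, not part of the service):

```python
import json

# The menu JSON as returned by GET /api/menu above
menu = json.loads(
    '{"items":[{"name":"burger","price":50},'
    '{"name":"fries","price":20},{"name":"coke","price":10}]}'
)
prices = {item["name"]: item["price"] for item in menu["items"]}

def order_total(items):
    """Sum price * quantity for each ordered item."""
    return sum(prices[i["name"]] * i["quantity"] for i in items)

# One burger (50) plus two cokes (2 * 10) = 70
print(order_total([{"name": "burger", "quantity": 1},
                   {"name": "coke", "quantity": 2}]))
```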

&lt;p&gt;Make an order&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X POST localhost:9001/api/orders -d '{"items":[{"name":"burger","quantity":1}], "email":"test@gmail.com"}' -H 'Content-Type: application/json'&lt;/code&gt;&lt;/p&gt;
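The same request can be issued from Python with nothing but the standard library. A minimal sketch that builds (but does not send) the POST; the `build_order_request` helper and the base URL are assumptions for illustration, and sending it requires the port-forward above to be running:

```python
import json
import urllib.request

def build_order_request(items, email, base_url="http://localhost:9001"):
    """Build the same POST that the curl command above sends."""
    body = json.dumps({"items": items, "email": email}).encode()
    return urllib.request.Request(
        base_url + "/api/orders",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_order_request([{"name": "burger", "quantity": 1}], "test@gmail.com")
# To actually place the order (with the port-forward active):
#     urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
```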

&lt;p&gt;An email should arrive shortly at the email address specified above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thanks!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JPSMQ2Qx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne1szuza5953tahbjihz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JPSMQ2Qx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ne1szuza5953tahbjihz.png" alt="app" width="600" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>mongodb</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
