<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: suyash200</title>
    <description>The latest articles on Forem by suyash200 (@jsnomad).</description>
    <link>https://forem.com/jsnomad</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F920384%2F5807c2f1-7bc7-4f29-a26d-07d7345de5df.jpg</url>
      <title>Forem: suyash200</title>
      <link>https://forem.com/jsnomad</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jsnomad"/>
    <language>en</language>
    <item>
      <title>Decoding Kafka Part 2</title>
      <dc:creator>suyash200</dc:creator>
      <pubDate>Sat, 06 Dec 2025 16:05:45 +0000</pubDate>
      <link>https://forem.com/jsnomad/decoding-kafka-part-2-38ij</link>
      <guid>https://forem.com/jsnomad/decoding-kafka-part-2-38ij</guid>
      <description>&lt;h2&gt;
  
  
  Hands-On &amp;amp; Under the Hood
&lt;/h2&gt;

&lt;p&gt;In &lt;strong&gt;Part 1&lt;/strong&gt;, we established that Kafka is the high-speed highway for data, handling real-time streams with high throughput. We covered the basic anatomy: Brokers, Topics, and Partitions.&lt;/p&gt;

&lt;p&gt;Now, it’s time to get our hands dirty. In this part, we will spin up a local Kafka cluster, write code to produce and consume events, and—crucially—dive deeper into how Consumers actually track their progress using &lt;strong&gt;Offsets&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Setup: Kafka via Docker Compose
&lt;/h2&gt;

&lt;p&gt;Setting up Kafka manually can be complex (ZooKeeper, multiple brokers, etc.). To keep things clean, we’ll use &lt;strong&gt;Docker Compose&lt;/strong&gt;. This allows us to spin up a Broker and a UI tool in a single command.&lt;/p&gt;

&lt;p&gt;We will use the &lt;code&gt;apache/kafka&lt;/code&gt; image, &lt;code&gt;provectus/kafka-ui&lt;/code&gt; for monitoring, the &lt;code&gt;node.js&lt;/code&gt; runtime, and the &lt;code&gt;kafkajs&lt;/code&gt; client library.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the Docker Compose file to your system&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/suyash200/learning-kafka/blob/5f0391172e7f5c1c8c41439139112b567a0c7a4b/docker-compose.yaml" rel="noopener noreferrer"&gt;docker-compose.yaml&lt;/a&gt;  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the docker compose command&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;docker compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That's it! We now have a running Kafka broker, ready to churn out events 🙌&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Managing Topics (The Admin API)
&lt;/h2&gt;

&lt;p&gt;Before we send data, we need a destination. While you &lt;em&gt;can&lt;/em&gt; let Kafka auto-create topics, defining them via the &lt;strong&gt;Admin API&lt;/strong&gt; gives you control over two critical factors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Partitions:&lt;/strong&gt; How much parallelism do you need?&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Replication Factor:&lt;/strong&gt; How many copies of the data do you want for fault tolerance?
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9094"],
});

const createTopicIfNotExists = async () =&amp;gt; {
  const admin = kafka.admin();
  const topicName = "Test-topic";

  try {
    await admin.connect();

    // 1. Get the list of existing topics
    const topics = await admin.listTopics();
    console.log("Existing topics:", topics);

    // 2. Check if the specific topic exists
    if (!topics.includes(topicName)) {
      console.log(`Topic "${topicName}" not found. Creating...`);

      // 3. Create the topic
      await admin.createTopics({
        topics: [
          {
            topic: topicName,
            numPartitions: 1,     // Adjust based on your needs
            replicationFactor: 1, // Adjust based on your broker count
          },
        ],
      });
      console.log(`Topic "${topicName}" created successfully.`);
    } else {
      console.log(`Topic "${topicName}" already exists.`);
    }

  } catch (error) {
    console.error("Error in admin operation:", error);
  } finally {
    // 4. Always disconnect
    await admin.disconnect();
  }
};

createTopicIfNotExists();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. The Producer: Keys vs. No Keys
&lt;/h2&gt;

&lt;p&gt;Now, let's write some events. In Kafka, &lt;em&gt;how&lt;/em&gt; you send the message determines &lt;em&gt;where&lt;/em&gt; it lands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenario A: Sending without a Key
&lt;/h3&gt;

&lt;p&gt;If you send a message with &lt;code&gt;key=null&lt;/code&gt;, the producer creates a "Round Robin" effect. It distributes messages evenly across all available partitions. This is great for load balancing but implies no guarantee of order relative to other messages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9094"],
});

const producer = kafka.producer();

async function produceWithoutKey() {
  await producer.connect();
  await producer.send({
    topic: "Test-topic",
    messages: [
      {
        value: JSON.stringify({
          message: "Hello KafkaJS user!",
        }),
      },
    ],
  });

  await producer.disconnect();
}

produceWithoutKey(); // this will produce messages without a key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scenario B: Sending with a Key
&lt;/h3&gt;

&lt;p&gt;If you provide a Key (e.g., &lt;code&gt;user_id&lt;/code&gt; or &lt;code&gt;transaction_id&lt;/code&gt;), Kafka guarantees that &lt;strong&gt;all messages with the same key go to the same partition.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9094"],
});

const producer = kafka.producer();
async function produceWithKey() {
  await producer.connect();
  await producer.send({
    topic: "Test-topic",
    messages: [
      {
        key: "user-123", // stable key: all events for this user land in the same partition
        value: JSON.stringify({
          message: "Hello KafkaJS user!",
        }),
      },
    ],
  });

  await producer.disconnect();
}
produceWithKey(); // this will produce messages with a key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why does this matter?&lt;/strong&gt; If you are processing payment updates for &lt;code&gt;User A&lt;/code&gt;, you need "Payment Initiated" to arrive before "Payment Completed." Sending both with &lt;code&gt;key=User A&lt;/code&gt; ensures they land in the same partition and are read in order.&lt;/li&gt;
&lt;/ul&gt;
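&lt;p&gt;To build intuition for that guarantee, here is a minimal sketch of key-based partitioning. Note this is illustrative only: KafkaJS's real default partitioner uses a murmur2 hash, and the toy hash below just stands in for it:&lt;/p&gt;

```javascript
// Toy partitioner: hash the key, then map the hash onto a partition.
// (Illustrative only; KafkaJS's actual default partitioner uses murmur2.)
function partitionFor(key, numPartitions) {
  let hash = 0;
  for (const ch of key) {
    hash = (Math.imul(hash, 31) + ch.charCodeAt(0)) | 0;
  }
  return Math.abs(hash) % numPartitions;
}

// The same key always maps to the same partition,
// so events for "User A" are read back in the order they were written.
console.log(partitionFor("User A", 3));
console.log(partitionFor("User A", 3)); // identical to the line above
console.log(partitionFor("User B", 3)); // may differ
```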

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TIP 💡&lt;/strong&gt;: Check out the script &lt;a href="https://github.com/suyash200/learning-kafka/blob/53999ab33e6e464dc7cac9f294613a1743092ba2/multiple-events.zsh" rel="noopener noreferrer"&gt;multiple-events.zsh&lt;/a&gt; —it generates events repeatedly to simulate high-volume data, giving you a feel for the scale Kafka is designed to handle. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. The Consumer
&lt;/h2&gt;

&lt;p&gt;Consuming messages is where the logic gets interesting. This isn't just about reading data; it's about tracking &lt;em&gt;state&lt;/em&gt;. &lt;strong&gt;groupId&lt;/strong&gt; is an important parameter: it places a consumer in a consumer group, which lets multiple consumers share events in parallel, handles rebalancing when members join or leave, and manages consumption tracking, among other things. Typically, consumers run in a separate microservice so they can process events independently from the producer or other services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "my-app",
  brokers: ["localhost:9094"],
});

const consumer = kafka.consumer({ groupId: "Test-group" });

await consumer.connect();
await consumer.subscribe({ topic: "Test-topic" });



await consumer.run({
  eachMessage: async ({ topic, partition, message }) =&amp;gt; {
    // processing as per need
    console.log({
      partition,
      offset: message.offset,
      value: JSON.parse(message.value.toString()),
      message: message,
      topic,
    });
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding &lt;strong&gt;Offsets&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As a consumer reads from a partition, it needs to keep track of its place. We call this the &lt;strong&gt;Offset&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TIP 💡:&lt;/strong&gt; Consider the Offset as a &lt;strong&gt;bookmark&lt;/strong&gt; while reading a book. It tells you exactly where you stopped reading so you can pick up from there next time (or after a crash).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are three specific types of offsets you should know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Log-End Offset:&lt;/strong&gt; The offset of the very last message written to the partition (the newest data).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-Watermark Offset:&lt;/strong&gt; The highest offset that has been replicated to all in-sync replicas; consumers can only read messages up to this point.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Committed Offset:&lt;/strong&gt; The last offset the consumer successfully processed and reported done.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Consumer Lag
&lt;/h3&gt;

&lt;p&gt;One of the most important metrics to watch is &lt;strong&gt;Consumer Lag&lt;/strong&gt;. This is essentially the distance between the writer (Producer) and the reader (Consumer).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Consumer Lag = LogEndOffset - CommittedOffset&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If your lag is high, your consumer is falling behind the producer 🚨&lt;/p&gt;
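&lt;p&gt;As a quick worked example, with made-up offsets:&lt;/p&gt;

```javascript
// Hypothetical numbers: the producer has written up to offset 1500,
// while the consumer group last committed offset 1200.
const logEndOffset = 1500;
const committedOffset = 1200;

const consumerLag = logEndOffset - committedOffset;
console.log(`Consumer lag: ${consumerLag} messages`); // 300 messages behind
```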

&lt;h3&gt;
  
  
  The Consumer Group Logic (Scaling)
&lt;/h3&gt;

&lt;p&gt;When you start a consumer, you usually assign it to a &lt;strong&gt;Group&lt;/strong&gt;. Kafka automatically balances the partitions among the consumers in that group.&lt;/p&gt;

&lt;p&gt;Here is the relationship between the number of Consumers n(C) and the number of Partitions n(P):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;n(C) &amp;lt; n(P):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; Some consumers will read from multiple partitions.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Status:&lt;/em&gt; Heavy load on individual consumers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;n(C) = n(P):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; The ideal state. Each consumer handles exactly one partition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status:&lt;/strong&gt; Balanced.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;n(C) &amp;gt; n(P):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; Since a partition cannot be split, the extra consumers will have no work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status:&lt;/strong&gt; Idle. (Useful only as failover backups).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
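&lt;p&gt;The three cases above can be sketched with a simple round-robin assignment. Kafka's actual assignment strategy is configurable (range, round-robin, sticky, etc.); this loop only illustrates the outcomes:&lt;/p&gt;

```javascript
// Sketch: deal partitions out to the group's consumers round-robin style.
function assignPartitions(numPartitions, consumers) {
  const assignment = Object.fromEntries(consumers.map((c) => [c, []]));
  for (let p = 0; p !== numPartitions; p++) {
    assignment[consumers[p % consumers.length]].push(p);
  }
  return assignment;
}

console.log(assignPartitions(3, ["C1", "C2"]));       // fewer consumers: C1 reads two partitions
console.log(assignPartitions(3, ["C1", "C2", "C3"])); // balanced: one partition each
console.log(assignPartitions(2, ["C1", "C2", "C3"])); // extra consumer: C3 sits idle
```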




&lt;h2&gt;
  
  
  5. Monitoring with Kafka UI
&lt;/h2&gt;

&lt;p&gt;Finally, we can visualize everything we just built. By opening &lt;code&gt;localhost:9012&lt;/code&gt; (or your configured port), we can see the Kafka UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to look for:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Topics:&lt;/strong&gt; Verify your topic exists with the correct partition count.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36awgznkfbvj0v5q5w5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36awgznkfbvj0v5q5w5m.png" alt="Topics screen on kafka-ui" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Messages:&lt;/strong&gt; Here we can see each message with its assigned partition and offset. How partitions are assigned is out of scope for now; we'll pick it up in a future part. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xb95ujbrvlaeiawdyrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xb95ujbrvlaeiawdyrd.png" alt="Message queue in kafka-ui" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt; &lt;strong&gt;Consumers:&lt;/strong&gt; Check your &lt;strong&gt;Consumer Lag&lt;/strong&gt;. If you see the lag growing, it means your consumer script can't keep up with the producer!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0mxyn1ugd44j4v1dvlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0mxyn1ugd44j4v1dvlz.png" alt="Consumer List for kafka topic on kafka-ui" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it for today. In the next part of this ongoing series, we'll go deeper by applying Kafka in some real projects.&lt;/p&gt;

&lt;p&gt;See ya 👋, keep learning!&lt;/p&gt;

&lt;p&gt;Follow me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;X (Twitter): &lt;a href="https://x.com/SuyashLade" rel="noopener noreferrer"&gt;https://x.com/SuyashLade&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/suyash-lade/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/suyash-lade/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Appendix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/38024514/understanding-kafka-topics-and-partitions" rel="noopener noreferrer"&gt;Kafka-topic-partitions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redpanda.com/guides/kafka-architecture-kafka-offset" rel="noopener noreferrer"&gt;kafka-offsets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.confluent.io/kafka/design/replication.html" rel="noopener noreferrer"&gt;kafka-replication-guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/suyash200/learning-kafka.git" rel="noopener noreferrer"&gt;github-repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>javascript</category>
      <category>backend</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Decoding Kafka: A Streaming Powerhouse?</title>
      <dc:creator>suyash200</dc:creator>
      <pubDate>Sun, 24 Nov 2024 09:23:52 +0000</pubDate>
      <link>https://forem.com/jsnomad/decoding-kafka-a-streaming-powerhouse-4cag</link>
      <guid>https://forem.com/jsnomad/decoding-kafka-a-streaming-powerhouse-4cag</guid>
      <description>&lt;h2&gt;
  
  
  What is Kafka exactly?
&lt;/h2&gt;

&lt;p&gt;Imagine a high-speed highway for data. That's essentially what Kafka is! It's a powerful tool for handling real-time data streams in large-scale systems. Think of it as a system that captures continuous data flow from various sources like applications, user interfaces, and servers, and stores it for further analysis and processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to use Kafka?
&lt;/h2&gt;

&lt;p&gt;Kafka is built for large-scale systems that require high throughput. It's ideal for situations where real-time data transfer is crucial, such as managing payment transactions, Internet of Things (IoT) data flows, and monitoring systems. It's a popular choice in data platforms, event-driven architectures, and microservices environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Let's take a payment service where, after a successful payment, we need to generate an invoice, send an email, and add a database entry, all at the same time. Add a subscription event where we have to capture analytics and enable user access, and Kafka can help streamline this entire setup at scale and in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffih2z4pe1c3gekc0gwes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffih2z4pe1c3gekc0gwes.png" alt="Kafka setup for payment flows" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka Working
&lt;/h2&gt;

&lt;p&gt;Kafka works on the &lt;a href="https://kafka.apache.org/protocol.html" rel="noopener noreferrer"&gt;Kafka protocol&lt;/a&gt;, which is built on top of TCP. It uses a distributed system of &lt;strong&gt;servers&lt;/strong&gt; and &lt;strong&gt;clients&lt;/strong&gt; to process data. &lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka-protocol
&lt;/h3&gt;

&lt;p&gt;Kafka employs a binary protocol over TCP for efficient communication between clients and servers. This binary protocol is designed to minimize overhead and maximize performance. Unlike traditional protocols that require a handshake for every connection, Kafka's protocol establishes a long-lasting connection, reducing the overhead of repeated handshakes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka-server
&lt;/h3&gt;

&lt;p&gt;A Kafka server is a cluster of one or more brokers that handles distributed processing of events, potentially across multiple data centers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Brokers are individual Kafka servers responsible for orchestrating the event streaming.&lt;/li&gt;
&lt;li&gt;A Kafka cluster is highly scalable and fault-tolerant: if any of its servers fails, the others take over its work to ensure continuous operation without data loss.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kafka-client
&lt;/h3&gt;

&lt;p&gt;Kafka clients are applications that interact with Kafka clusters to produce and consume events. Kafka has client libraries available for many languages, including Node.js, Python, C, C++, and Java.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Producers generate and send events to specific Kafka topics. They can operate asynchronously for high throughput or synchronously for guaranteed delivery.&lt;/li&gt;
&lt;li&gt;Consumers subscribe to topics of interest and actively poll the brokers for new events. They can be part of consumer groups, allowing multiple consumers to collaborate and share the load of processing events from a single topic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ww6uxm1r8lzod5ut9sv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ww6uxm1r8lzod5ut9sv.png" alt="Kafka setup with producer broker and consumer polling" width="590" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kafka-Terminologies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Event
&lt;/h3&gt;

&lt;p&gt;We have been talking about events for quite some time now: an event is simply a record of an action that has happened in the application. Events belong to a topic and carry data (a message) to help with further processing. Unlike in traditional messaging systems, events are not deleted after consumption; we can configure how long an event is retained in the Kafka broker or cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Topic
&lt;/h3&gt;

&lt;p&gt;Topics are how events are identified and organized. Similar to folders in a file system, a topic groups related events together. Topics in Kafka are always multi-producer and multi-subscriber.&lt;/p&gt;

&lt;h3&gt;
  
  
  Partition
&lt;/h3&gt;

&lt;p&gt;Imagine a topic as a book with multiple chapters (partitions). Each chapter (partition) can hold a sequence of events. When you produce an event, you essentially decide which chapter (partition) it belongs to based on its key.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For each partition, one broker is designated as the leader, while others are followers. The leader handles reads and writes for the partition, while followers replicate data from the leader. If the leader fails, one of the followers is promoted to become the new leader.&lt;/li&gt;
&lt;li&gt;Kafka brokers maintain metadata about the cluster's state, including the topics, partitions, and their respective leaders. Clients fetch this metadata to determine which broker to send requests to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15pgqu8br8z9afh9l95z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F15pgqu8br8z9afh9l95z.png" alt="Example of topic partition, event being replicated on multiple brokers" width="800" height="414"&gt;&lt;/a&gt;&lt;br&gt;
Example of topic partitioning: each event is assigned to a specific partition based on a partitioning key (&lt;a href="https://kafka.apache.org/protocol.html#protocol_partitioning" rel="noopener noreferrer"&gt;docs&lt;/a&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka-APIs
&lt;/h3&gt;

&lt;p&gt;All these APIs are provided by client-side packages to interact with Kafka brokers/clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Admin API&lt;/strong&gt; to manage and inspect topics, brokers, and other Kafka objects.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Producer API&lt;/strong&gt; to publish (write) a stream of events to one or more Kafka topics.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Consumer API&lt;/strong&gt; to subscribe to (read) one or more topics and to process the stream of events produced to them.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Kafka Streams API&lt;/strong&gt; to implement stream processing applications and microservices. Helpful for aggregations and pre-processing data, such as joins and windowing.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Kafka Connect API&lt;/strong&gt; to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More info on installing Kafka can be found &lt;a href="https://kafka.apache.org/documentation/#quickstart" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Hope this gives you a basic idea of Kafka, how it works, and why it's needed. I'll keep updating this series as I learn Kafka.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>systemdesign</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Setting Up Your Next Project</title>
      <dc:creator>suyash200</dc:creator>
      <pubDate>Sun, 25 Jun 2023 16:30:29 +0000</pubDate>
      <link>https://forem.com/jsnomad/setting-up-your-project-3382</link>
      <guid>https://forem.com/jsnomad/setting-up-your-project-3382</guid>
      <description>&lt;p&gt;Looking for a guide to start your project setup? You found a right place. This blog will give you an basic idea for setting up your project and this series will help you as a beginner. I am using next but this can apply for any framework as well as vanilla.js&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js&lt;/li&gt;
&lt;li&gt;npm&lt;/li&gt;
&lt;li&gt;HTML&lt;/li&gt;
&lt;li&gt;CSS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's start...&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Creating the Next.js project
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-next-app app-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For beginners, enabling ESLint in the setup options is recommended.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project structure
&lt;/h3&gt;

&lt;p&gt;Let's understand the structure of the current project and update it to a more suitable one.&lt;br&gt;
   How it looks now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3il6ztwjknu3jcepo2g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3il6ztwjknu3jcepo2g1.png" alt=" " width="331" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following steps would be followed:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Remove the api folder from pages
&lt;/h4&gt;

&lt;h4&gt;
  
  
  2. Create a components folder
&lt;/h4&gt;

&lt;h4&gt;
  
  
  3. Create an api folder in the root directory
&lt;/h4&gt;

&lt;h4&gt;
  
  
  4. Create an interceptor.js file and an Endpoints folder
&lt;/h4&gt;

&lt;p&gt;Now it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2fnbehqhvs4sszzn40o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2fnbehqhvs4sszzn40o.png" alt=" " width="298" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Working on CSS
&lt;/h3&gt;

&lt;p&gt;Let's clear the entire global CSS...&lt;br&gt;
  Now, starting fresh.&lt;/p&gt;

&lt;h4&gt;
  
  
  CSS reset
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  *{
    margin:0;
    padding:0;
    box-sizing: border-box;
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resets the browser's default styles, removing the margin and padding that are automatically added to HTML elements, and makes element sizing more predictable with &lt;code&gt;border-box&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Root Selector
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;:root{
   --white:#fff,
   --black:#000,

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The :root selector matches the document's root element. We declare our variables here; typically these are the colors and fonts that are used repeatedly throughout the web app.&lt;/p&gt;

&lt;h4&gt;
  
  
  Body
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
body {
  background: var(--white); /* a variable declared in :root */
  font-family: "Playfair Display", sans-serif;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CSS declarations in this part are applied throughout the web app; we can set the background color for our website here, and properties like font-size and font-family can also be declared.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>beginners</category>
      <category>css</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Understanding Basics of Node.js</title>
      <dc:creator>suyash200</dc:creator>
      <pubDate>Mon, 03 Apr 2023 13:02:45 +0000</pubDate>
      <link>https://forem.com/jsnomad/understanding-basics-of-nodejs-3e80</link>
      <guid>https://forem.com/jsnomad/understanding-basics-of-nodejs-3e80</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Node.js is a JavaScript runtime environment built on the &lt;a href="https://v8.dev/" rel="noopener noreferrer"&gt;V8 engine&lt;/a&gt;. It is open-source and single-threaded, and is used to build &lt;strong&gt;server-side&lt;/strong&gt; applications. &lt;/p&gt;

&lt;p&gt;Let's understand through a visualization how Node.js works:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tn2j5eqkv7ogvvyymzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tn2j5eqkv7ogvvyymzh.png" alt="An visual overview of how node.js works" width="800" height="450"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Modules in Node
&lt;/h2&gt;

&lt;p&gt;A core feature of Node.js is modules: self-contained units of functionality, simple or complex, that can be reused. Each module in Node.js has its own context, so it cannot interfere with other modules or pollute the global scope. The module system is based on the common &lt;a href="https://requirejs.org/docs/commonjs.html" rel="noopener noreferrer"&gt;JavaScript modules standard&lt;/a&gt; (CommonJS).&lt;br&gt;
Modules are of three types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Core&lt;/strong&gt;: Built-in modules that ship with Node.js.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Local&lt;/strong&gt;: Local modules are modules created locally in your Node.js application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Third-Party&lt;/strong&gt;: External packages that are installed via package managers like  &lt;a href="https://www.npmjs.com/" rel="noopener noreferrer"&gt;npm&lt;/a&gt; or &lt;a href="https://yarnpkg.com/" rel="noopener noreferrer"&gt;yarn&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Node package Manager(NPM)
&lt;/h2&gt;

&lt;p&gt;"There is a npm package for that."&lt;br&gt;
 This quote sums up importance of npm. It allows third party packages to be installed to your project. Anyone can upload their&lt;br&gt;
 packages to website and allow others to use.&lt;/p&gt;
&lt;h1&gt;
  
  
  Example
&lt;/h1&gt;

&lt;p&gt;Now that we've taken a look at the basics of Node.js, let's write a program using the http module, which is used for backend development.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var http = require('http');

http.createServer(function (req, res) {
  res.write('Hello World!');  
  res.end();  
}).listen(8080); 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, the http module is imported first; it needs no installation because it is a Core module in Node.js.&lt;br&gt;
createServer is used to create a server.&lt;br&gt;
req and res represent the request to and the response from the server.&lt;br&gt;
.listen() specifies the port on which the server listens for requests.&lt;br&gt;
The output of this server, on port 8080, would be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mzooinjsqh03m4tbzp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mzooinjsqh03m4tbzp7.png" alt="Output of the above server" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That covers the basics of Node.js. More documentation can always be found in the &lt;a href="https://nodejs.org/en/docs/" rel="noopener noreferrer"&gt;Node.js docs&lt;/a&gt;. Thank you for reading!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>node</category>
    </item>
  </channel>
</rss>
