<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter Kraft</title>
    <description>The latest articles on Forem by Peter Kraft (@kraftp).</description>
    <link>https://forem.com/kraftp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1467050%2F7f44e5be-9530-4403-91ce-07a0025bcb1e.jpeg</url>
      <title>Forem: Peter Kraft</title>
      <link>https://forem.com/kraftp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kraftp"/>
    <language>en</language>
    <item>
      <title>How To Process Events Exactly-Once</title>
      <dc:creator>Peter Kraft</dc:creator>
      <pubDate>Sun, 06 Oct 2024 21:16:08 +0000</pubDate>
      <link>https://forem.com/dbos/how-to-process-events-exactly-once-4dpo</link>
      <guid>https://forem.com/dbos/how-to-process-events-exactly-once-4dpo</guid>
      <description>&lt;p&gt;Want to process incoming events &lt;strong&gt;exactly-once&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;Well, any distributed systems pedant will say you can't, because it's &lt;strong&gt;theoretically impossible&lt;/strong&gt;. And technically, they're right: if you send a message and don't get an answer, you have no way of knowing whether the receiver is offline or just slow, so eventually you have no choice but to send the message again if you want it processed.&lt;/p&gt;

&lt;p&gt;So if exactly-once processing is impossible, why do many systems, including DBOS, claim to provide it?  &lt;/p&gt;

&lt;p&gt;The trick is to leverage another property: &lt;strong&gt;idempotence&lt;/strong&gt;. If you design a message receiver to be idempotent, you can deliver a message to it multiple times and duplicate deliveries will have no effect. Thus, the combination of at-least-once delivery and an idempotent receiver is, in practice, equivalent to exactly-once semantics.&lt;/p&gt;
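&lt;p&gt;As a minimal sketch (not the DBOS implementation), here's the idea in Python: the receiver remembers which keys it has already processed, so redelivering a message has no effect. The names are illustrative, and a real system would store the keys durably rather than in memory.&lt;/p&gt;

```python
# Sketch: at-least-once delivery plus an idempotent receiver behaves like
# exactly-once processing. The in-memory set stands in for durable storage.
processed_keys = set()
counter = 0

def handle(message_key, amount):
    """Apply the message's effect only if its key hasn't been seen before."""
    global counter
    if message_key in processed_keys:
        return  # duplicate delivery: no effect
    processed_keys.add(message_key)
    counter += amount

handle("order-42", 10)
handle("order-42", 10)  # redelivered: skipped
handle("order-43", 5)
print(counter)  # 15: the duplicate was not double-counted
```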

&lt;p&gt;Under the hood, this is exactly how DBOS event receivers (like for Kafka) work. They generate a unique key from an event (for example, from a Kafka topic + partition + offset) and use it as an idempotency key for the event processing workflow. That way, even if an event is delivered multiple times, the workflow only processes it once.&lt;/p&gt;

&lt;p&gt;Here's all the code you need to process Kafka messages exactly-once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dbos&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DBOS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;KafkaMessage&lt;/span&gt;

&lt;span class="nd"&gt;@DBOS.kafka_consumer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nd"&gt;@DBOS.workflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_kafka_workflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;KafkaMessage&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;DBOS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Message received: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Learn more &lt;a href="https://docs.dbos.dev/python/tutorials/kafka-integration" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>python</category>
      <category>kafka</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Making Serverless 15x Cheaper</title>
      <dc:creator>Peter Kraft</dc:creator>
      <pubDate>Thu, 13 Jun 2024 00:41:35 +0000</pubDate>
      <link>https://forem.com/dbos/making-serverless-15x-cheaper-751</link>
      <guid>https://forem.com/dbos/making-serverless-15x-cheaper-751</guid>
      <description>&lt;p&gt;We’ve often heard developers working on stateful applications say that they want to adopt serverless technology to more easily deploy to the cloud, but can’t because it’s prohibitively expensive. For clarity, by “stateful applications” we mean applications that manage persistent state stored in a database (for example, a web app built on Postgres).&lt;/p&gt;

&lt;p&gt;In this blog post, we'll show how to make serverless 15x cheaper for stateful applications. First, we'll explain why it's so expensive today. Then, we'll show how we architected the DBOS Cloud serverless platform to make serverless affordable for stateful applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving Serverless Efficiency
&lt;/h2&gt;

&lt;p&gt;The first strategy DBOS Cloud uses to make serverless efficient is sharing executors across requests to improve resource utilization. The key idea is that stateful applications are typically I/O-bound: they spend most of their time interacting with remote data stores and external services and rarely do complex computation locally. &lt;/p&gt;

&lt;p&gt;Consider an e-commerce application that shows you the status of your latest order. When you send it a request, it calls out to a database to look up what your last order was, then waits for a response, then calls out again to look up the status of that order, then waits for a response, then finally returns the status back to you.  Here’s what that might look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt14w9o5iykxpre0dwg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt14w9o5iykxpre0dwg6.png" alt="Waiting for a response" width="800" height="52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Current serverless platforms execute such applications inefficiently because they launch &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html" rel="noopener noreferrer"&gt;every request into a separate execution environment&lt;/a&gt;. Each execution environment sends a request to the database, then is blocked doing nothing useful until it receives the database’s response. Here’s what it looks like if two people concurrently look up their latest orders in AWS Lambda:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.prod.website-files.com%2F656ffe813302aab28fca115e%2F662b25da427cf2e764288334_DBOS-versus-AWS-Lambda-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.prod.website-files.com%2F656ffe813302aab28fca115e%2F662b25da427cf2e764288334_DBOS-versus-AWS-Lambda-3.png" alt="Waiting for a response in Lambda" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For stateful applications, current serverless platforms spend most of their time doing nothing, then charge you for the idle time. &lt;/p&gt;

&lt;p&gt;By contrast, DBOS Cloud multiplexes concurrent requests to the same execution environment, using the time one request spends waiting for a result from the database to serve another request. To make sure execution environments are never overloaded, DBOS Cloud continuously monitors their utilization and autoscales when needed, using &lt;a href="https://firecracker-microvm.github.io/" rel="noopener noreferrer"&gt;Firecracker&lt;/a&gt; to rapidly instantiate new execution environments. Here’s what it looks like if multiple people concurrently look up their latest orders in DBOS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsruase3436av9bhj4xt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsruase3436av9bhj4xt8.png" alt="Serving concurrent requests in DBOS" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;
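&lt;p&gt;The benefit of multiplexing I/O-bound requests can be sketched with Python's asyncio (hypothetical names; the sleeps stand in for database round trips): while one request awaits the database, the event loop serves another, so the waits overlap instead of running back-to-back.&lt;/p&gt;

```python
import asyncio
import time

# Sketch: two I/O-bound requests share one event loop, so their database
# waits (simulated with sleeps) overlap instead of running sequentially.
async def order_status(user):
    await asyncio.sleep(0.1)  # look up the user's latest order
    await asyncio.sleep(0.1)  # look up that order's status
    return f"{user}: shipped"

async def main():
    return await asyncio.gather(order_status("alice"), order_status("bob"))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
# Two 0.2-second requests complete in about 0.2 seconds total, not 0.4.
print(results, round(elapsed, 1))
```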

&lt;h3&gt;
  
  
  DBOS Includes Reliable Workflow Execution
&lt;/h3&gt;

&lt;p&gt;The second strategy DBOS Cloud uses is leveraging &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;DBOS Transact’s&lt;/a&gt; reliable workflows to efficiently orchestrate multiple functions in a single application. Many applications consist of multiple discrete steps that all need to complete for the application to succeed; for example, the checkout flow for an online store might reserve inventory, then process payment, then ship the order, then send a confirmation email:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31oxikxelu5j9ggnmb12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31oxikxelu5j9ggnmb12.png" alt="A simple workflow" width="800" height="58"&gt;&lt;/a&gt;&lt;br&gt;
Conventional serverless platforms require an expensive external orchestrator like &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt; to coordinate these steps, executing each in sequence and retrying them when they fail. By contrast, DBOS Cloud uses the reliable workflows built into open-source DBOS Transact to guarantee transactional execution for an application at no additional cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  DBOS Cloud vs. AWS Lambda Cost Comparison
&lt;/h2&gt;

&lt;p&gt;Now that we’ve explained how DBOS Cloud achieves its cost efficiency, let’s measure it by comparing the cost to run a stateful application workload on DBOS Cloud versus AWS Lambda. We’re referencing AWS Lambda because it’s the most popular serverless platform, but the numbers are comparable for other platforms like Azure Functions or Google Cloud Functions. &lt;/p&gt;

&lt;p&gt;Consider an application workflow with four steps. To keep the math simple, let’s say each step takes 10 ms and the application is invoked 250M times a month (~100 application invocations/second). Since the workflow takes 40 ms total, this works out to 10M execution seconds per month. Assuming a 512MB executor is used for both DBOS Cloud and Lambda, here’s how the cost compares:&lt;/p&gt;

&lt;h3&gt;
  
  
  DBOS Cloud Cost
&lt;/h3&gt;

&lt;p&gt;We’ll assume that in DBOS, the application is implemented as four operations in a &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;DBOS Transact&lt;/a&gt; reliable workflow. Stateful, reliable workflow execution is built into DBOS, so there is no need for a separate orchestrator like AWS Step Functions. The 10M execution seconds per month falls within the $40 per-month DBOS Cloud Pro pricing tier, so the total cost is $40 per month.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Lambda + AWS Step Functions Cost
&lt;/h3&gt;

&lt;p&gt;In AWS Lambda, this application would likely be implemented as four functions orchestrated by an AWS Step Functions workflow. AWS Lambda &lt;a href="https://aws.amazon.com/lambda/pricing/" rel="noopener noreferrer"&gt;charges&lt;/a&gt; $0.20 per 1M function invocations plus $8.33 per 1M execution seconds (assuming a 512MB executor). Additionally, AWS Step Functions &lt;a href="https://aws.amazon.com/step-functions/pricing/" rel="noopener noreferrer"&gt;charges&lt;/a&gt; (using Express Workflows, the cheapest option) $1.00 per 1M workflow invocations plus $8.33 per 1M operation-seconds of execution. Thus the total cost for Lambda is $200 for the 1B function invocations (four per workflow) plus $83.33 for 10M execution seconds. The total cost for Step Functions is $250 for 250M workflow invocations plus $83.33 for 10M operation-seconds. The grand total is $616.66. As we said earlier in this post, that is over 15 times more expensive than DBOS Cloud. Here's a table to summarize our cost comparison:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7v7jmrltmdy1xyju8l9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7v7jmrltmdy1xyju8l9.png" alt="cost comparison table" width="791" height="278"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
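&lt;p&gt;The arithmetic above can be reproduced in a few lines (using the per-million prices as quoted in this post; the rounded $8.33 rate makes the total come out a few cents under $616.66):&lt;/p&gt;

```python
# Reproduce the cost comparison using the prices quoted above.
workflows = 250e6       # workflow invocations per month
steps = 4               # functions per workflow
step_seconds = 0.01     # 10 ms per step

function_invocations = workflows * steps                  # 1 billion
execution_seconds = function_invocations * step_seconds   # 10 million

lambda_cost = (function_invocations / 1e6) * 0.20 + (execution_seconds / 1e6) * 8.33
step_fn_cost = (workflows / 1e6) * 1.00 + (execution_seconds / 1e6) * 8.33
aws_total = lambda_cost + step_fn_cost  # about $616.60

dbos_total = 40.0  # flat DBOS Cloud Pro tier covers the 10M execution seconds

print(f"AWS ${aws_total:.2f} vs DBOS ${dbos_total:.2f}: {aws_total / dbos_total:.1f}x")
```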

&lt;h2&gt;
  
  
  Try it out!
&lt;/h2&gt;

&lt;p&gt;To get started with DBOS, follow our &lt;a href="https://docs.dbos.dev/getting-started/quickstart" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; to download the open-source &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;DBOS Transact&lt;/a&gt; framework and start running code locally.&lt;/p&gt;

&lt;p&gt;If you have any questions about the article or about DBOS, please ask in the comments!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>typescript</category>
      <category>postgres</category>
    </item>
    <item>
      <title>How to Process Events Exactly-Once with Kafka and DBOS</title>
      <dc:creator>Peter Kraft</dc:creator>
      <pubDate>Fri, 03 May 2024 22:00:40 +0000</pubDate>
      <link>https://forem.com/dbos/how-to-process-events-exactly-once-with-kafka-and-dbos-2ael</link>
      <guid>https://forem.com/dbos/how-to-process-events-exactly-once-with-kafka-and-dbos-2ael</guid>
      <description>&lt;p&gt;Many applications built on event streaming systems like &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt; need exactly-once semantics for data integrity. In other words, they need to guarantee that events are processed exactly one time and are never under- or over-counted. However, exactly-once semantics are hard to obtain in distributed systems, so many event streaming platforms default to weaker guarantees. For example, Kafka natively supports exactly-once semantics only for &lt;a href="https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/" rel="noopener noreferrer"&gt;stream processing&lt;/a&gt;, not for general applications. As a result, developers who need exactly-once semantics must invent &lt;a href="https://www.uber.com/blog/real-time-exactly-once-ad-event-processing/" rel="noopener noreferrer"&gt;complicated architectures&lt;/a&gt; to obtain it.&lt;/p&gt;

&lt;p&gt;In this post, we’ll show you how the open-source TypeScript framework &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;DBOS Transact&lt;/a&gt; makes exactly-once event processing easy. We’ll demonstrate how to write, in &amp;lt;15 lines of code, a Kafka consumer that processes every message sent to a topic exactly one time, regardless of interruptions, crashes, or failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  How exactly-once Kafka message processing works with DBOS Transact
&lt;/h2&gt;

&lt;p&gt;Implementing exactly-once semantics in a distributed system is challenging because it’s &lt;a href="https://bravenewgeek.com/you-cannot-have-exactly-once-delivery/" rel="noopener noreferrer"&gt;theoretically impossible&lt;/a&gt;: the sender of a message cannot always verify whether an unresponsive receiver failed to process their message or is just being slow to acknowledge receipt. To get around this problem, a common strategy is to deliver a message multiple times, but design the receiver to process each event &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/Idempotent" rel="noopener noreferrer"&gt;idempotently&lt;/a&gt; so that duplicate message deliveries have no effect. However, developing idempotent applications is also notoriously difficult!&lt;/p&gt;

&lt;p&gt;DBOS Transact makes exactly-once event processing easy by making idempotency easy. You can process Kafka events exactly-once with two different types of operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.dbos.dev/tutorials/transaction-tutorial" rel="noopener noreferrer"&gt;Transactions&lt;/a&gt;, which execute a single database transaction.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.dbos.dev/tutorials/workflow-tutorial" rel="noopener noreferrer"&gt;Workflows&lt;/a&gt;, which &lt;a href="https://docs.dbos.dev/tutorials/workflow-tutorial#reliability-guarantees" rel="noopener noreferrer"&gt;reliably execute&lt;/a&gt; multiple operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When your consumer receives a message from Kafka, DBOS Transact constructs an idempotency key for that message from the message’s topic, partition, and offset, a combination that is guaranteed to be unique within a Kafka cluster. What happens next depends on whether you’re executing a transaction or a workflow.&lt;/p&gt;
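&lt;p&gt;Sketched in Python for brevity (the framework itself is TypeScript), such a key can be any deterministic encoding of the triple; the exact format below is illustrative, not necessarily the one DBOS Transact uses:&lt;/p&gt;

```python
# The (topic, partition, offset) triple uniquely identifies a message within
# a Kafka cluster, so any deterministic encoding of it works as an
# idempotency key: redeliveries of the same message map to the same key.
def idempotency_key(topic, partition, offset):
    return f"{topic}-{partition}-{offset}"

key = idempotency_key("example-topic", 3, 12345)
print(key)  # example-topic-3-12345
```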

&lt;h2&gt;
  
  
  Exactly-once transactions
&lt;/h2&gt;

&lt;p&gt;If you’re processing the message with a single transaction, DBOS Transact synchronously executes the transaction, records its idempotency key &lt;strong&gt;as part of that transaction&lt;/strong&gt;, then commits the message’s offset to Kafka. This guarantees exactly-once processing because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If transaction execution fails, the offset isn’t committed, so Kafka retries the message later.&lt;/li&gt;
&lt;li&gt;If a duplicate message arrives, DBOS Transact checks the idempotency key and doesn’t re-execute the transaction (the transaction logic also handles corner cases where the message arrives twice simultaneously, a subject for a future blog post).&lt;/li&gt;
&lt;/ul&gt;
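&lt;p&gt;This pattern can be sketched like so (Python with SQLite standing in for Postgres; table and function names are illustrative). Because the idempotency key and the message's effect are written in the same transaction, either both commit or neither does:&lt;/p&gt;

```python
import sqlite3

# Sketch of recording the idempotency key in the SAME transaction as the
# message's effect: a duplicate delivery hits the primary-key constraint
# and the whole transaction rolls back, so nothing is applied twice.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed (key TEXT PRIMARY KEY)")
db.execute("CREATE TABLE totals (topic TEXT PRIMARY KEY, n INTEGER)")

def process_once(key, topic):
    """Apply the message's effect exactly once; return False on a duplicate."""
    try:
        with db:  # one transaction: both writes commit, or neither does
            db.execute("INSERT INTO processed VALUES (?)", (key,))
            db.execute(
                "INSERT INTO totals VALUES (?, 1) "
                "ON CONFLICT(topic) DO UPDATE SET n = n + 1",
                (topic,),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # key already recorded: duplicate delivery, no effect

print(process_once("example-topic-0-7", "example-topic"))  # True
print(process_once("example-topic-0-7", "example-topic"))  # False (duplicate)
print(db.execute("SELECT n FROM totals").fetchone()[0])    # 1
```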

&lt;h2&gt;
  
  
  Exactly-once workflows
&lt;/h2&gt;

&lt;p&gt;If you’re processing the message with a workflow, DBOS Transact synchronously records the idempotency key, commits the message’s offset to Kafka, then asynchronously processes the workflow. This guarantees exactly-once processing because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If recording the idempotency key fails, the offset isn’t committed, so Kafka will send the message again later.&lt;/li&gt;
&lt;li&gt;If the workflow fails, it will always recover and resume from where it left off because &lt;a href="https://docs.dbos.dev/tutorials/workflow-tutorial" rel="noopener noreferrer"&gt;DBOS workflows are reliable&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If a duplicate message arrives, DBOS Transact checks the idempotency key and doesn’t start a new workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with Kafka and DBOS
&lt;/h2&gt;

&lt;p&gt;Now, let’s see what this looks like in code. The first step to exactly-once semantics is to write your event processing code in TypeScript using &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;DBOS Transact&lt;/a&gt;. You can write a single database &lt;a href="https://docs.dbos.dev/tutorials/transaction-tutorial" rel="noopener noreferrer"&gt;transaction&lt;/a&gt; or a &lt;a href="https://docs.dbos.dev/tutorials/workflow-tutorial" rel="noopener noreferrer"&gt;workflow&lt;/a&gt; coordinating multiple operations. For now, let’s keep it simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;WorkflowContext&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@dbos-inc/dbos-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;KafkaExample&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
  &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;kafkaWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;WorkflowContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;KafkaMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;      
    &lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Message received: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, annotate your method with a &lt;a href="https://docs.dbos.dev/api-reference/decorators#kafka-consume" rel="noopener noreferrer"&gt;@KafkaConsume&lt;/a&gt; decorator specifying which topic to consume events from. Additionally, annotate your class with an &lt;a href="https://docs.dbos.dev/api-reference/decorators#kafka" rel="noopener noreferrer"&gt;@Kafka&lt;/a&gt; decorator defining which brokers to connect to.  That’s all there is to it! When you start your application, DBOS Transact will invoke your method exactly-once for each message sent to the topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;KafkaConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;KafkaMessage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;kafkajs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;WorkflowContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Kafka&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;KafkaConsume&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@dbos-inc/dbos-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Kafka&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;brokers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost:9092&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]})&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;KafkaExample&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;KafkaConsume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-topic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;kafkaWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;WorkflowContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;KafkaMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Message received: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need more control, you can pass &lt;a href="https://kafka.js.org/" rel="noopener noreferrer"&gt;KafkaJS&lt;/a&gt; configuration objects to both the class and method Kafka decorators. For example, you can specify a custom consumer group ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;KafkaConsume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example-topic&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;groupId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;custom-group-id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;kafkaWorkflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;WorkflowContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;topic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;partition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;KafkaMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="nx"&gt;ctxt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Message received: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get started with DBOS Transact, check out the &lt;a href="https://docs.dbos.dev/getting-started/quickstart" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; and &lt;a href="https://docs.dbos.dev/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;. For more information on using Kafka with DBOS Transact, check out &lt;a href="https://docs.dbos.dev/tutorials/kafka-integration" rel="noopener noreferrer"&gt;this docs page&lt;/a&gt;. To join our community:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out our code and contribute &lt;a href="https://github.com/dbos-inc/dbos-transact" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Join us &lt;a href="https://discord.gg/jsmC6pXGgX" rel="noopener noreferrer"&gt;on Discord&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kafka</category>
      <category>typescript</category>
      <category>database</category>
      <category>node</category>
    </item>
  </channel>
</rss>
