<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Anton Mihaylov</title>
    <description>The latest articles on Forem by Anton Mihaylov (@antonmihaylov).</description>
    <link>https://forem.com/antonmihaylov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F432710%2F94341fc0-47ff-47c7-9d43-791567d921e6.jpg</url>
      <title>Forem: Anton Mihaylov</title>
      <link>https://forem.com/antonmihaylov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/antonmihaylov"/>
    <language>en</language>
    <item>
      <title>Implementing a Retry and DLQ Strategy in NATS JetStream</title>
      <dc:creator>Anton Mihaylov</dc:creator>
      <pubDate>Sat, 29 Nov 2025 22:00:00 +0000</pubDate>
      <link>https://forem.com/antonmihaylov/implementing-a-retry-and-dlq-strategy-in-nats-jetstream-4k2k</link>
      <guid>https://forem.com/antonmihaylov/implementing-a-retry-and-dlq-strategy-in-nats-jetstream-4k2k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf40tf0buo7amxqbdlc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf40tf0buo7amxqbdlc2.png" alt=" " width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are moving from RabbitMQ or Kafka to NATS JetStream, you usually feel a sense of relief. The configuration is simpler, the binaries are smaller, and the performance is excellent.&lt;/p&gt;

&lt;p&gt;But eventually, you hit the "Day 2" reality of event-driven architectures: message failures.&lt;/p&gt;

&lt;p&gt;When a consumer fails to process a message, you can't just drop it. You need a retry policy, and eventually, a Dead Letter Queue (DLQ). While NATS provides all the primitives to build this, it does not provide a "system" out of the box. You have to wire it yourself.&lt;/p&gt;

&lt;p&gt;This guide walks through how to architect a robust DLQ flow using native JetStream features, and how to manually handle message replays when things go wrong.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NATS Server with JetStream enabled (&lt;code&gt;nats-server -js&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nats&lt;/code&gt; CLI installed&lt;/li&gt;
&lt;li&gt;Basic understanding of NATS streams and consumers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Architecture of Failure&lt;/h2&gt;

&lt;p&gt;In JetStream, a DLQ isn't a special checkbox; it's just another Stream. To set this up correctly, you need a specific topology involving &lt;code&gt;MaxDeliver&lt;/code&gt;, proper acknowledgment handling, and advisory message capture.&lt;/p&gt;
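&lt;p&gt;At a high level, the topology we are about to build looks like this (the stream and consumer names match the examples that follow):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;orders.*  --&amp;gt;  ORDERS (stream)  --&amp;gt;  processing_worker (consumer, MaxDeliver=5)
                                            |
                                            | deliveries exhausted / Term()
                                            v
            $JS.EVENT.ADVISORY.CONSUMER.*  --&amp;gt;  DLQ_ORDERS (advisory stream)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;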

&lt;h3&gt;1. The Retry Policy&lt;/h3&gt;

&lt;p&gt;First, your consumer configuration determines how many times NATS attempts to deliver a message before giving up.&lt;/p&gt;

&lt;p&gt;When you configure your consumer, pay attention to &lt;code&gt;MaxDeliver&lt;/code&gt; and &lt;code&gt;BackOff&lt;/code&gt;. A linear retry is rarely what you want; an escalating (roughly exponential) backoff prevents thundering-herd scenarios against your database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nats consumer add ORDERS processing_worker &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-deliver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--backoff&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1s,5s,1m,5m,10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The &lt;code&gt;BackOff&lt;/code&gt; configuration only applies to &lt;em&gt;acknowledgment timeouts&lt;/em&gt;, not to explicit NAK responses. When a consumer sends a &lt;code&gt;Nak()&lt;/code&gt;, the message is redelivered immediately. To add delay on an explicit NAK, use &lt;code&gt;NakWithDelay()&lt;/code&gt; in your client code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Immediate redelivery&lt;/span&gt;
&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Nak&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c"&gt;// Delayed redelivery (recommended for transient failures)&lt;/span&gt;
&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NakWithDelay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. The Dead Letter Stream&lt;/h3&gt;

&lt;p&gt;Once &lt;code&gt;MaxDeliver&lt;/code&gt; is reached, the consumer stops attempting to deliver that specific message. &lt;strong&gt;The message is not deleted&lt;/strong&gt;—it remains in the stream but is simply skipped by that consumer. To capture these failures, we use JetStream's advisory system.&lt;/p&gt;

&lt;p&gt;When a message exhausts its delivery attempts, NATS publishes an advisory to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.&amp;lt;STREAM&amp;gt;.&amp;lt;CONSUMER&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, when your application explicitly terminates a message using &lt;code&gt;Term()&lt;/code&gt;, an advisory is published to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$JS.EVENT.ADVISORY.CONSUMER.MSG_TERMINATED.&amp;lt;STREAM&amp;gt;.&amp;lt;CONSUMER&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Setting Up the DLQ Stream&lt;/h4&gt;

&lt;p&gt;Create a stream that captures these advisory messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nats stream add DLQ_ORDERS &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--subjects&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'$JS.EVENT.ADVISORY.CONSUMER.MAX_DELIVERIES.ORDERS.*,$JS.EVENT.ADVISORY.CONSUMER.MSG_TERMINATED.ORDERS.*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--storage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;file &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--retention&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;limits &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--max-age&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;72h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The advisory payload looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"io.nats.jetstream.advisory.v1.max_deliver"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EgGS1FReM5Yy1I7L9UXs4W"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-12-24T14:38:13.107632319Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"stream"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ORDERS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"consumer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"processing_worker"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"stream_seq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"deliveries"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
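&lt;p&gt;Since the advisory is plain JSON, a replay tool can decode it with a small struct. Here is a minimal sketch (the struct and function names are mine; the field names match the payload above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
	"encoding/json"
	"fmt"
)

// MaxDeliverAdvisory mirrors the fields we care about from the advisory payload.
type MaxDeliverAdvisory struct {
	Type       string `json:"type"`
	Stream     string `json:"stream"`
	Consumer   string `json:"consumer"`
	StreamSeq  uint64 `json:"stream_seq"`
	Deliveries int    `json:"deliveries"`
}

func parseAdvisory(raw []byte) (MaxDeliverAdvisory, error) {
	var adv MaxDeliverAdvisory
	err := json.Unmarshal(raw, &amp;amp;adv)
	return adv, err
}

func main() {
	raw := []byte(`{"type":"io.nats.jetstream.advisory.v1.max_deliver","stream":"ORDERS","consumer":"processing_worker","stream_seq":6,"deliveries":5}`)

	adv, err := parseAdvisory(raw)
	if err != nil {
		panic(err)
	}
	// The stream sequence is the pointer back to the original message.
	fmt.Printf("replay candidate: stream=%s seq=%d\n", adv.Stream, adv.StreamSeq)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;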



&lt;p&gt;The &lt;code&gt;stream_seq&lt;/code&gt; field is critical—it tells you exactly which message failed, and you can retrieve it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nats stream get ORDERS 6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;3. Using Term for Explicit Dead-Lettering&lt;/h3&gt;

&lt;p&gt;Instead of waiting for &lt;code&gt;MaxDeliver&lt;/code&gt; to be exhausted, your consumer can explicitly terminate a message using &lt;code&gt;Term()&lt;/code&gt;. This immediately stops redelivery and publishes an advisory.&lt;/p&gt;

&lt;p&gt;Use this when your application can determine a message is permanently unprocessable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Message is malformed or missing required data&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;isValidPayload&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Term&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c"&gt;// Immediately dead-letter, don't retry&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The "Hard Way" (Manual) Replay Workflow
&lt;/h2&gt;

&lt;p&gt;Let's assume you have a setup where failed messages have triggered advisories in &lt;code&gt;DLQ_ORDERS&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It's 3:00 AM. An invalid payload caused 500 messages to hit the DLQ. You've pushed a hotfix to the consumer code to handle this payload. Now, you need to replay those 500 messages.&lt;/p&gt;

&lt;p&gt;NATS does not have a "Replay" button. You have to do this via the CLI or a custom script. Here is the manual workflow for replaying messages without data loss.&lt;/p&gt;

&lt;h3&gt;Step 1: Inspect the DLQ Advisories&lt;/h3&gt;

&lt;p&gt;First, view the advisory messages to understand what failed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nats stream view DLQ_ORDERS 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each advisory contains the &lt;code&gt;stream_seq&lt;/code&gt; of the original message. To inspect the actual failed message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nats stream get ORDERS &amp;lt;stream_seq&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at the headers and payload to understand why processing failed.&lt;/p&gt;

&lt;h3&gt;Step 2: Republishing&lt;/h3&gt;

&lt;p&gt;You need to read the failed messages and publish them back to the original subject (&lt;code&gt;orders.created&lt;/code&gt;), not the DLQ subject.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;nats stream backup&lt;/code&gt; and &lt;code&gt;nats stream restore&lt;/code&gt; commands are designed for disaster recovery, not for selective message replay. They create binary snapshots and restore to the &lt;em&gt;same&lt;/em&gt; stream, which isn't what we want here.&lt;/p&gt;

&lt;p&gt;For a single message, you can replay it directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6  &lt;span class="c"&gt;# The stream_seq from the advisory&lt;/span&gt;
&lt;span class="nv"&gt;SOURCE_STREAM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ORDERS"&lt;/span&gt;
&lt;span class="nv"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nats stream get &lt;span class="nv"&gt;$SOURCE_STREAM&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$seq&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--json&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;subject&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$msg&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.subject'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$msg&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.data'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; | nats pub &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For batch replay of all DLQ advisories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# Replay all failed messages from DLQ advisories&lt;/span&gt;

&lt;span class="nv"&gt;DLQ_STREAM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"DLQ_ORDERS"&lt;/span&gt;
&lt;span class="nv"&gt;SOURCE_STREAM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ORDERS"&lt;/span&gt;

&lt;span class="c"&gt;# Get both first and last sequence numbers&lt;/span&gt;
&lt;span class="nv"&gt;info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nats stream info &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DLQ_STREAM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--json&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;first_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$info&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.state.first_seq'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;last_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$info&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.state.last_seq'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$first_seq&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$last_seq&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# Get advisory and extract source stream sequence&lt;/span&gt;
    &lt;span class="nv"&gt;advisory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nats stream get &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DLQ_STREAM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--json&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="k"&gt;continue
    &lt;/span&gt;&lt;span class="nv"&gt;stream_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$advisory&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.data'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.stream_seq'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;# Get original message&lt;/span&gt;
    &lt;span class="nv"&gt;original&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nats stream get &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE_STREAM&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$stream_seq&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--json&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="k"&gt;continue
    &lt;/span&gt;&lt;span class="nv"&gt;subject&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$original&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.subject'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$original&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.data'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;# Republish&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$data&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | nats pub &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Replayed seq=&lt;/span&gt;&lt;span class="nv"&gt;$stream_seq&lt;/span&gt;&lt;span class="s2"&gt; to &lt;/span&gt;&lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;The Friction Points&lt;/h3&gt;

&lt;p&gt;If you are doing this manually during an incident, you face several issues:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deduplication edge case:&lt;/strong&gt; If you use explicit &lt;code&gt;Nats-Msg-Id&lt;/code&gt; headers for exactly-once publishing AND you're replaying within the dedup window (default 2 minutes), the message will be rejected. In practice, DLQ messages are usually hours old, so this rarely matters.&lt;/p&gt;
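&lt;p&gt;If you do hit the dedup window, one option (assuming you control the replay script) is to attach a fresh &lt;code&gt;Nats-Msg-Id&lt;/code&gt; so deduplication keys on a new value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical replay-id scheme; any unique value works
echo "$data" | nats pub "$subject" --header="Nats-Msg-Id: replay-$stream_seq-$(date +%s)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;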

&lt;p&gt;&lt;strong&gt;Ordering:&lt;/strong&gt; If strict ordering matters, replaying from a script breaks the global order relative to new incoming messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility:&lt;/strong&gt; While running your script, you have no visual confirmation of the drain rate other than repeatedly running &lt;code&gt;nats stream info&lt;/code&gt;.&lt;/p&gt;
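&lt;p&gt;A crude stopgap, assuming &lt;code&gt;jq&lt;/code&gt; is available, is to poll the stream state in a loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Print the DLQ message count every two seconds
while true; do
  nats stream info DLQ_ORDERS --json | jq -r '.state.messages'
  sleep 2
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;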

&lt;h3&gt;The Multi-Consumer Problem&lt;/h3&gt;

&lt;p&gt;There's a subtle but serious issue with the replay workflow above: &lt;strong&gt;the original message is still in the source stream&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When you replay a message by publishing it to the original subject, you're creating a &lt;em&gt;new&lt;/em&gt; message. If multiple consumers subscribe to the same stream—common in microservice architectures where different services react to the same events—this causes duplicate processing.&lt;/p&gt;

&lt;p&gt;Consider this scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;ORDERS&lt;/code&gt; stream has two consumers: &lt;code&gt;billing_worker&lt;/code&gt; and &lt;code&gt;notification_worker&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;billing_worker&lt;/code&gt; fails to process message seq=42 (database timeout) and it hits MaxDeliver&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;notification_worker&lt;/code&gt; successfully processed seq=42 and sent a confirmation email&lt;/li&gt;
&lt;li&gt;You replay seq=42 by publishing to &lt;code&gt;orders.created&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Both&lt;/strong&gt; consumers receive the replayed message&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;notification_worker&lt;/code&gt; sends a duplicate email&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is particularly dangerous for side-effecting operations: duplicate charges, duplicate notifications, or duplicate API calls to external systems.&lt;/p&gt;

&lt;h4&gt;Workaround 1: Dedicated Replay Subject&lt;/h4&gt;

&lt;p&gt;Configure your consumers to subscribe to multiple subjects, including a replay-specific one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Original consumer subscribes to both subjects&lt;/span&gt;
nats consumer add ORDERS billing_worker &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'orders.created,orders.created.replay.billing'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When replaying, publish to the consumer-specific replay subject:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$data&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | nats pub &lt;span class="s2"&gt;"orders.created.replay.billing"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures only the failed consumer receives the replayed message. Note that the &lt;code&gt;ORDERS&lt;/code&gt; stream's subject configuration must also cover the replay subjects (for example, &lt;code&gt;orders.&amp;gt;&lt;/code&gt;), or the republished message will never be captured by the stream at all.&lt;/p&gt;

&lt;h4&gt;Workaround 2: Replay Headers with Consumer Filtering&lt;/h4&gt;

&lt;p&gt;Add metadata to replayed messages so consumers can detect and handle them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add replay headers when republishing&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$data&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | nats pub &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$subject&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"X-Replay: true"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"X-Original-Seq: &lt;/span&gt;&lt;span class="nv"&gt;$stream_seq&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--header&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"X-Target-Consumer: billing_worker"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consumers check the header and skip if they're not the target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Header&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"X-Replay"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"true"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Header&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"X-Target-Consumer"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s"&gt;""&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;consumerName&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c"&gt;// Not for us, acknowledge and skip&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Workaround 3: Idempotent Consumers (The Right Way)&lt;/h4&gt;

&lt;p&gt;The most robust solution is designing consumers to be idempotent from the start. Store a processed message log keyed by &lt;code&gt;stream_seq&lt;/code&gt; or a business-level identifier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;processOrder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="n"&gt;jetstream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;orderID&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;extractOrderID&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;

    &lt;span class="c"&gt;// Check if already processed&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;wasProcessed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;orderID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c"&gt;// Idempotent: safe to skip&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// Process and mark as done atomically&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;processAndMarkDone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;orderID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NakWithDelay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This protects against replays, network retries, and JetStream's at-least-once delivery semantics.&lt;/p&gt;
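&lt;p&gt;The &lt;code&gt;wasProcessed&lt;/code&gt; check can live anywhere durable. As an illustration only, here is an in-memory, concurrency-safe sketch of that guard (a real deployment would back this with a database table or a JetStream KV bucket, since memory is lost on restart):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
	"fmt"
	"sync"
)

// IdempotencyGuard remembers which business IDs have been processed.
type IdempotencyGuard struct {
	mu   sync.Mutex
	seen map[string]bool
}

func NewIdempotencyGuard() *IdempotencyGuard {
	return &amp;amp;IdempotencyGuard{seen: make(map[string]bool)}
}

// FirstTime reports whether id is new, and atomically marks it as seen.
func (g *IdempotencyGuard) FirstTime(id string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.seen[id] {
		return false
	}
	g.seen[id] = true
	return true
}

func main() {
	guard := NewIdempotencyGuard()
	fmt.Println(guard.FirstTime("order-42")) // true: process it
	fmt.Println(guard.FirstTime("order-42")) // false: replayed duplicate, just Ack
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;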

&lt;h2&gt;Automating the Pain&lt;/h2&gt;

&lt;p&gt;The workflow above—detect, inspect, script, replay—is standard practice. It works. But it is slow, and writing one-off scripts during an outage is stress-inducing.&lt;/p&gt;

&lt;p&gt;This is exactly why we built StreamTrace.&lt;/p&gt;

&lt;p&gt;We realized that managing NATS JetStream shouldn't require writing bash scripts at 3 AM just to move a message from one stream to another. StreamTrace is a lightweight UI that sits on top of your NATS clusters.&lt;/p&gt;

&lt;p&gt;It turns the "Hard Way" into a UI workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visual DLQ Management:&lt;/strong&gt; View your DLQ stream, click the specific messages (or "Select All"), and hit Replay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Payload Inspection:&lt;/strong&gt; Decode JSON, Text, or Hex payloads right in the browser to debug why the failure happened before you replay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Progress:&lt;/strong&gt; Watch messages drain from your DLQ without repeatedly running &lt;code&gt;nats stream info&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is a self-hosted Docker container that connects directly to your NATS server. No cloud data leakage, no complex setup.&lt;/p&gt;

&lt;p&gt;If you are tired of writing restoration scripts, check out &lt;a href="https://streamtrace.io" rel="noopener noreferrer"&gt;StreamTrace&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;Further Reading&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.nats.io/nats-concepts/jetstream/consumers" rel="noopener noreferrer"&gt;Consumer Configuration&lt;/a&gt; — MaxDeliver, BackOff, AckWait&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.nats.io/using-nats/developer/develop_jetstream/consumers" rel="noopener noreferrer"&gt;Consumer Acknowledgments&lt;/a&gt; — Ack, Nak, NakWithDelay, Term&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.nats.io/using-nats/developer/develop_jetstream/model_deep_dive" rel="noopener noreferrer"&gt;JetStream Model Deep Dive&lt;/a&gt; — Internal protocol details&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.nats.io/running-a-nats-service/nats_admin/jetstream_admin/disaster_recovery" rel="noopener noreferrer"&gt;Disaster Recovery&lt;/a&gt; — Backup and restore workflows&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>microservices</category>
      <category>pubsub</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Which is the best Speech-to-text AI in 2022?</title>
      <dc:creator>Anton Mihaylov</dc:creator>
      <pubDate>Mon, 19 Dec 2022 00:00:00 +0000</pubDate>
      <link>https://forem.com/antonmihaylov/which-is-the-best-speech-to-text-ai-in-2022-2o7k</link>
      <guid>https://forem.com/antonmihaylov/which-is-the-best-speech-to-text-ai-in-2022-2o7k</guid>
<description>&lt;p&gt;As Artificial Intelligence (AI) continues to evolve and become increasingly integrated into our everyday lives, speech-to-text is becoming more and more prevalent. In 2022 there is a wide array of speech-to-text AIs available for developers to use.&lt;/p&gt;

&lt;p&gt;In this article I’ll share my findings on choosing the most suitable AI speech-to-text engine.&lt;/p&gt;

&lt;p&gt;To do that, let me first define a couple of criteria that I find most valuable. Yours might of course differ; for example, you may value ease of use more than cost, which will skew the decision in favor of cloud-based solutions.&lt;/p&gt;

&lt;p&gt;Here are my main decision drivers, ordered by importance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;code&gt;Very important&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Cost - &lt;code&gt;Very important&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Ease of use - &lt;code&gt;Somewhat important&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Performance - &lt;code&gt;Somewhat important&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Extra features - speaker recognition, punctuation, profanity masking, automatic chaptering - &lt;code&gt;Nice to have&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here are the contenders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assembly AI&lt;/li&gt;
&lt;li&gt;Amazon Transcribe&lt;/li&gt;
&lt;li&gt;Google Speech To Text&lt;/li&gt;
&lt;li&gt;Vosk&lt;/li&gt;
&lt;li&gt;OpenAI’s Whisper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that I’m not affiliated in any way with any of these, and the information below is the result of my personal findings when deciding which technology to use for a project. I’m not even close to being a machine learning expert, so this is the point of view of a web developer who simply wants to use these technologies.&lt;/p&gt;

&lt;p&gt;For testing I used this YouTube video, which is around 10 minutes long and contains both generic dialogue and a lot of technical terms and names.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=G5mtQhWNezQ"&gt;Best Python IDE: Vim, Emacs, PyCharm, or Visual Studio Code? | Guido van Rossum and Lex Fridman&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Findings
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Assembly AI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.assemblyai.com/"&gt;AssemblyAI | #1 API Platform for AI Models&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assembly AI is a cloud-based solution that provides not only speech-to-text but also a variety of other NLP services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Ljbg3C2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AssemblyAiFeatures.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Ljbg3C2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AssemblyAiFeatures.jpg" alt="AssemblyAiFeatures.jpg" width="880" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s a link to their playground, which I used for testing: &lt;a href="https://www.assemblyai.com/playground"&gt;https://www.assemblyai.com/playground&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;strong&gt;Amazing&lt;/strong&gt;, almost no issues, even when dealing with technical terms or names. It capitalizes accurately and has proper punctuation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vsYPYsIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AssemblyAiResult.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vsYPYsIF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AssemblyAiResult.png" alt="AssemblyAiResult.png" width="880" height="925"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost - it’s a cloud-based service, so it’s naturally &lt;strong&gt;quite expensive&lt;/strong&gt;. It’s comparable to other cloud-based solutions at &lt;strong&gt;$0.015/minute&lt;/strong&gt;, but that’s only for the core transcription service. If you need the extra Audio Intelligence features, you need &lt;strong&gt;quite a bit more budget&lt;/strong&gt; at &lt;strong&gt;$0.035/minute&lt;/strong&gt;. Of course, this could be worth it for the result accuracy and the peace of mind that comes with using a managed service.&lt;/li&gt;
&lt;li&gt;Ease of use - the plus side of cloud-based solutions is that they are &lt;strong&gt;very easy to use&lt;/strong&gt; - you just call an API and you’re done, with no infrastructure to manage.&lt;/li&gt;
&lt;li&gt;Performance - they advertise it as around 20% of the running time, which sounds &lt;strong&gt;very good&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Extra features - it has the &lt;strong&gt;richest audio intelligence feature set&lt;/strong&gt; of all the options, if you have the budget for the more expensive plan.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: &lt;strong&gt;Assembly AI’s quality and large feature set would be my choice if I needed a top-notch cloud-based solution and budget wasn’t a problem.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Amazon Transcribe
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/transcribe/"&gt;Amazon Transcribe – Speech to Text - AWS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is AWS’s speech-to-text solution, which might be the obvious choice for teams that are already in the AWS ecosystem in one way or another.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;strong&gt;decent&lt;/strong&gt;: noticeably worse than Assembly AI, but workable for simple use cases. It has proper punctuation and mostly good capitalization, but it doesn’t handle technical terms or names well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xa5M2lKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AmazonTranscribeResult.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xa5M2lKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/AmazonTranscribeResult.png" alt="AmazonTranscribeResult.png" width="880" height="1076"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost - it is also a cloud-based solution, so it’s &lt;strong&gt;quite costly&lt;/strong&gt; at &lt;strong&gt;$0.024/minute&lt;/strong&gt;. But the cost goes down the more minutes you use per month, down to &lt;strong&gt;$0.0077/minute&lt;/strong&gt; if you use more than 10 million minutes per month, which is a &lt;strong&gt;good deal if you have a lot of audio to transcribe&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Performance - &lt;strong&gt;very good&lt;/strong&gt; - the 10-minute test video was processed in 2 minutes and 20 seconds, around 20% of the running time.&lt;/li&gt;
&lt;li&gt;Extra features - you get profanity masking and speaker recognition out of the box.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: &lt;strong&gt;I would pick Amazon Transcribe if there’s a large volume of audio to transcribe each month and it doesn’t matter that the quality is not as good as it can be.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Google Speech-to-Text
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/speech-to-text/"&gt;Speech-to-Text: Automatic Speech Recognition  |  Google Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google’s solution is widely known and, in general, Google has a reputation for delivering top-notch AI solutions. There’s a free demo on the page, where I uploaded my video and saw the results right away.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;strong&gt;Good&lt;/strong&gt;: not as good as AssemblyAI, but definitely better than Amazon. It handles technical terms well most of the time, but misses them here and there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x52vuz2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/GoogleResult.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x52vuz2L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/GoogleResult.png" alt="GoogleResult.png" width="880" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost - it is also a cloud-based solution, so it’s &lt;strong&gt;quite costly&lt;/strong&gt; at &lt;strong&gt;$0.016/minute&lt;/strong&gt;, and that is the cheaper option, which allows Google to record the audio you send and use it to “help improve the machine learning models”. There is a more expensive option at &lt;strong&gt;$0.024/minute&lt;/strong&gt;, which is more private.&lt;/li&gt;
&lt;li&gt;Ease of use - again, as a cloud-based solution it’s &lt;strong&gt;very easy to use&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Performance - my test showed &lt;strong&gt;decent performance&lt;/strong&gt;, considering it’s a free demo page - a 1-minute audio clip (the demo maximum) was done in around 30 seconds, with speaker recognition and punctuation turned on.&lt;/li&gt;
&lt;li&gt;Extra features - you get &lt;strong&gt;speaker recognition&lt;/strong&gt; and &lt;strong&gt;punctuation&lt;/strong&gt; readily available&lt;/li&gt;
&lt;li&gt;Other pros - it’s Google, so the service is more likely to be available and reliable than other providers’, given their massive infrastructure and talent.&lt;/li&gt;
&lt;li&gt;Other cons - it’s Google, and a lot of people have privacy concerns with that.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: Google’s solution is a pretty stable choice if you want a cloud-based solution, but you’d have to seriously consider whether Assembly AI isn’t worth it instead, since it has better quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Vosk
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://alphacephei.com/vosk/"&gt;VOSK Offline Speech Recognition API&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the first open source option on the list. As such, it’s completely free to use, but you have to manage your own infrastructure. This might be considered a pro by some people, because of the control and privacy, and a con by others, because of the effort needed to set it up and manage it. But it has a pretty active community, so if you run into problems, you won’t find it hard to find solutions.&lt;/p&gt;

&lt;p&gt;Another big plus for Vosk is that there’s a WebAssembly version, which means you can run it in the browser if you need to. It’s not as fast and it requires the user to download a pretty large model file, but it can be a great way to save costs, stay private, and have a performant solution by offloading the work to the user’s machine. You can test the web demo here: &lt;a href="https://ccoreilly.github.io/vosk-browser/"&gt;https://ccoreilly.github.io/vosk-browser/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;strong&gt;decent&lt;/strong&gt;, similar to Amazon Transcribe. Note that I tested the larger models; the smaller 40 MB model is not that good and is worse than Amazon Transcribe. There is no capitalization or punctuation, but if you want to tinker with it you can use other WASM solutions to handle that.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cshpLYGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/VoskResult.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cshpLYGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/VoskResult.png" alt="VoskResult.png" width="880" height="1196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost - it’s a self-hosted solution, so in general it’s &lt;strong&gt;less expensive&lt;/strong&gt; than cloud-based counterparts, but you have to factor in the cost of managing it. You’re not really paying for a service or an application; you’d be paying for the servers that run it. Or you can run it client-side in the browser &lt;strong&gt;completely free&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Ease of use - as a self-hosted solution it’s &lt;strong&gt;harder to set up&lt;/strong&gt; and you have to manage the servers that run it. You don’t have to be a machine learning expert to use it - any mid-level backend engineer should be able to set it up - but you can’t compare it to just calling an external API.&lt;/li&gt;
&lt;li&gt;Performance - I tested the WASM version, so the native one might be a bit faster. In the browser I got a speed of around &lt;strong&gt;20% of the running time&lt;/strong&gt; on my MacBook M1 Pro, which is &lt;strong&gt;very good&lt;/strong&gt; and comparable to the cloud vendors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: &lt;strong&gt;it’s a decent open source solution and a great one if you need to run it in the browser, but I’d look more at the next option otherwise.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Whisper by OpenAI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openai/whisper"&gt;GitHub - openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whisper is a relatively new entry, made by the OpenAI team, who are behind some of the most cutting-edge technologies in the AI space right now.&lt;/p&gt;

&lt;p&gt;It’s also open source, so the introduction to Vosk applies here as well if you skipped it.&lt;/p&gt;

&lt;p&gt;There’s a C++ port of Whisper that compiles to WebAssembly, but it’s not official and is just a hobby project for now: &lt;a href="https://github.com/ggerganov/whisper.cpp"&gt;https://github.com/ggerganov/whisper.cpp&lt;/a&gt;. So you can technically run it in the browser as well, but it’s questionable whether that’s the right choice if you’re looking for a long-term maintainable and stable solution. Of course, if you have enough ML, C++, and WASM knowledge, you can take the initiative and help the project along. One big benefit of using it instead of Vosk’s WASM version is that Whisper’s models are a lot smaller, which matters because every browser that uses them has to download them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Result accuracy - &lt;strong&gt;amazing&lt;/strong&gt;, comparable to Assembly AI, even with the smaller models. And you can increase the timing precision so that, for example, you get timestamps for each word.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R4na3alj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/WhisperResult.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R4na3alj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://antomic.net/images/blog/which-speech-to-text-ai-is-the-best-in-2022/WhisperResult.png" alt="WhisperResult.png" width="880" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost - again, as a self-hosted solution, it’s &lt;strong&gt;less expensive&lt;/strong&gt; than cloud-based counterparts, but you have to factor in the cost of managing it. You’re not really paying for a service or an application; you’d be paying for the servers that run it. Or you can run it client-side in the browser &lt;strong&gt;completely free&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Ease of use - again, as a self-hosted solution it’s &lt;strong&gt;harder to set up&lt;/strong&gt; and you have to manage the servers that run it.&lt;/li&gt;
&lt;li&gt;Performance - around &lt;strong&gt;20% of the running time&lt;/strong&gt; on my Macbook M1 Pro, which is &lt;strong&gt;very good&lt;/strong&gt; and comparable to the cloud vendors.&lt;/li&gt;
&lt;li&gt;Extra features - no speaker recognition, but you can use other tools to do that &lt;a href="https://github.com/openai/whisper/discussions/264"&gt;https://github.com/openai/whisper/discussions/264&lt;/a&gt; .&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verdict: &lt;strong&gt;it’s an amazing open source project and would be my go-to option if you’re looking for a self-hosted or browser-based solution.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, when it comes to finding the best speech-to-text AI for 2022, there are a variety of options to choose from, each with its own use cases. Deciding which to use depends on your requirements, budget, and use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assembly AI&lt;/strong&gt; is a top-notch cloud-based solution that provides excellent result accuracy and comes with a wide range of audio intelligence features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Transcribe&lt;/strong&gt; is a good choice for those who need to transcribe large volumes of audio, but have a lower requirement for accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Speech-to-Text&lt;/strong&gt; offers good result accuracy, as well as speaker recognition and punctuation out of the box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vosk&lt;/strong&gt; is a self-hosted open-source solution that’s comparable to Amazon Transcribe in quality.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;Whisper by OpenAI&lt;/strong&gt; is another open-source solution that offers amazing accuracy and is a great choice for those looking for a self-hosted or browser-based solution. This will be my choice for the project.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>speechtotext</category>
    </item>
    <item>
      <title>Unity Tutorial: Simple Aseprite to Unity workflow using an Automated Scripted Importer</title>
      <dc:creator>Anton Mihaylov</dc:creator>
      <pubDate>Thu, 30 Jul 2020 18:41:33 +0000</pubDate>
      <link>https://forem.com/antonmihaylov/unity-tutorial-simple-aseprite-to-unity-workflow-using-an-automated-scripted-importer-23en</link>
      <guid>https://forem.com/antonmihaylov/unity-tutorial-simple-aseprite-to-unity-workflow-using-an-automated-scripted-importer-23en</guid>
<description>&lt;p&gt;If you develop games with Unity and make pixel art for them with Aseprite, you’ve probably done the same annoying thing over and over again, just like me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create pixel art in Aseprite&lt;/li&gt;
&lt;li&gt;Export the image to a png/pngs&lt;/li&gt;
&lt;li&gt;Open up Unity to import them, set up animations or load them into sprites&lt;/li&gt;
&lt;li&gt;Realize you need to adjust pixel art images&lt;/li&gt;
&lt;li&gt;Change them up in Aseprite&lt;/li&gt;
&lt;li&gt;Export them again&lt;/li&gt;
&lt;li&gt;Open up Unity, but realize that you have to re-do the animation because you added one more frame, or see that the reference to the sprite has broken&lt;/li&gt;
&lt;li&gt;Fix that&lt;/li&gt;
&lt;li&gt;Repeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But all of this can be cut down to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create pixel art in Aseprite, inside the Assets folder&lt;/li&gt;
&lt;li&gt;Save the file and have Unity load sprites and animations automatically every time you change it&lt;/li&gt;
&lt;li&gt;...that’s it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And this all can happen with the magic of Unity’s new Scripted Importers API (which is still experimental, by the way, but gets the job done).&lt;/p&gt;

&lt;p&gt;So, anyway, let’s see what setting up a project with a custom Scripted Importer for Aseprite looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Aseprite file
&lt;/h2&gt;

&lt;p&gt;Our first job of course is to create the pixel art in Aseprite.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y-iQtct---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/Screenshot-50-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y-iQtct---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/Screenshot-50-1.png" alt="" width="880" height="473"&gt;&lt;/a&gt;We have our green guy here, with three animations: Running, Jump Charging, and Jumping. Nothing special for now, let’s save that file to Unity and see what happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing the Aseprite file in Unity
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cWtXKLc7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/Screenshot-51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cWtXKLc7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/Screenshot-51.png" alt="" width="880" height="475"&gt;&lt;/a&gt;The player asset you see highlighted there is the .ase file that we saved earlier. When expanded, you can see that all of our frames are automatically exported from Aseprite and imported into Unity. And, which is more, you have the animations on the top there, ready to be plugged in in your Animation controller. And all I did was save the Aseprite file to the Assets folder.&lt;/p&gt;

&lt;p&gt;Let’s add an animation to an object, so we can see it in action.&lt;/p&gt;

&lt;p&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UFqnXLJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/2020-07-30-20-23-24.gif" alt="" width="880" height="476"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Automatically importing changes
&lt;/h2&gt;

&lt;p&gt;Let’s say that we now want to change up the pixel art, for example – let’s make it purple.&lt;/p&gt;

&lt;p&gt;We hop over to Aseprite and adjust the hue a bit and we have this:&lt;/p&gt;

&lt;p&gt;Now, if we go back to Unity (I’m still in Play mode, by the way), we see that it is automatically re-imported with the updated pixel art, without us doing anything.&lt;/p&gt;

&lt;p&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eLMwkh-1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://www.antomicgames.com/wp-content/uploads/2020/07/auto-change-previw.gif" alt="" width="880" height="489"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Auto Importer for Aseprite
&lt;/h2&gt;

&lt;p&gt;I’ve bundled up the Scripted Importer you see in action here and published it as an asset on the Asset Store, so that I’m not the only one who can benefit from it.&lt;/p&gt;

&lt;p&gt;You can find it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://assetstore.unity.com/packages/tools/sprite-management/auto-importer-for-aseprite-lite-174613"&gt;Auto Importer for Aseprite LITE&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://assetstore.unity.com/packages/tools/sprite-management/auto-importer-for-aseprite-pro-174615"&gt;Auto Importer for Aseprite PRO&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only difference is that the PRO version supports animations and the LITE version doesn’t.&lt;/p&gt;

&lt;p&gt;Hopefully they can be of use to you too!&lt;/p&gt;

</description>
      <category>indiegamedev</category>
      <category>unity2d</category>
      <category>unity3d</category>
      <category>aseprite</category>
    </item>
  </channel>
</rss>
