<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Thiago</title>
    <description>The latest articles on Forem by Thiago (@thiagosilvaf).</description>
    <link>https://forem.com/thiagosilvaf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F173529%2Ffcdb5603-793e-4ee8-8e65-d5cc5b52ea29.jpg</url>
      <title>Forem: Thiago</title>
      <link>https://forem.com/thiagosilvaf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thiagosilvaf"/>
    <language>en</language>
    <item>
      <title>GOCDC and Postgres</title>
      <dc:creator>Thiago</dc:creator>
      <pubDate>Tue, 18 Feb 2020 09:25:30 +0000</pubDate>
      <link>https://forem.com/thiagosilvaf/gocdc-and-postgres-1m4m</link>
      <guid>https://forem.com/thiagosilvaf/gocdc-and-postgres-1m4m</guid>
      <description>&lt;h2&gt;
  
  
  Change Data Capture (CDC) and Postgres
&lt;/h2&gt;

&lt;p&gt;Recently, I wrote &lt;a href="https://dev.to/thiagosilvaf/how-to-use-change-database-capture-cdc-in-postgres-37b8"&gt;this&lt;/a&gt; post, where I show you guys how to set up &lt;strong&gt;Change Data Capture (CDC)&lt;/strong&gt; in Postgres. So, if you don't know how to set up CDC in Postgres, I totally recommend you guys take a quick look at that tutorial first. &lt;/p&gt;

&lt;h2&gt;
  
  
  GOCDC
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ThiagoSilvaF/gocdc"&gt;GoCdc&lt;/a&gt; it's an Opensource API for data streaming, developed (I mean, in development) in Golang that I would like to share with you guys. In short, the concept behind it is similar to &lt;a href="https://github.com/debezium"&gt;Debezium&lt;/a&gt;, but in Golang, which in my opinion, it's easier for anyone to hack and adapt the application according to your needs. The focus of GoCdc is on simplicity. Simplicity to setup, to modify, etc. &lt;/p&gt;

&lt;h3&gt;
  
  
  Hands-on 🛠️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Required:&lt;/strong&gt;&lt;br&gt;
Docker -&amp;gt; &lt;a href="https://www.docker.com/get-started"&gt;https://www.docker.com/get-started&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1 - Creating the Project
&lt;/h3&gt;

&lt;p&gt;First of all, create a &lt;em&gt;docker-compose.yml&lt;/em&gt; file in the directory of your preference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gocdc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;133thiago/gocdc:latest"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8000:8000"&lt;/span&gt;

  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres:11"&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_postgres"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_USER=postgres&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD=postgres&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_DB=example_db&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wal_level=logical"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my_dbdata:/var/lib/postgresql/data&lt;/span&gt;

  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wurstmeister/zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2181:2181"&lt;/span&gt;

  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wurstmeister/kafka&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9092:9092"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ADVERTISED_HOST_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;KAFKA_ZOOKEEPER_CONNECT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper:2181&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my_dbdata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  2 - Running in Docker
&lt;/h3&gt;

&lt;p&gt;Now run &lt;em&gt;docker-compose up -d&lt;/em&gt;, and then run &lt;strong&gt;docker ps&lt;/strong&gt;. You should see the containers that were created.&lt;/p&gt;
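&lt;p&gt;For reference, the commands look like this (just a sketch; the container names and IDs in your output will differ):&lt;/p&gt;

```shell
# Start every service defined in docker-compose.yml in the background
docker-compose up -d

# List the running containers; you should see entries for
# gocdc, postgres, zookeeper and kafka
docker ps
```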

&lt;h3&gt;
  
  
  3 - Postgres Setup
&lt;/h3&gt;

&lt;p&gt;As I said at the beginning of this post, this step is covered by this tutorial -&amp;gt; &lt;a href="https://dev.to/thiagosilvaf/how-to-use-change-database-capture-cdc-in-postgres-37b8"&gt;How to use Change Data Capture (CDC) with Postgres&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: Just make sure you either create your database as &lt;strong&gt;example_db&lt;/strong&gt;, matching the value of &lt;strong&gt;POSTGRES_DB&lt;/strong&gt; in our docker-compose.yml, or change the value in the docker-compose.yml to the database that you created. It is up to you.&lt;/p&gt;

&lt;h3&gt;
  
  
  4 - Kafka config:
&lt;/h3&gt;

&lt;p&gt;Now, it's time to create our Kafka Topic, the topic to which our connector is going to send the database changes. &lt;br&gt;
First, run &lt;strong&gt;docker ps&lt;/strong&gt; again and get the &lt;em&gt;CONTAINER ID&lt;/em&gt; of your Kafka container; it will be something like &lt;em&gt;4bed6164a8e9&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS

4bed6164a8e9        wurstmeister/kafka             &lt;span class="s2"&gt;"start-kafka.sh"&lt;/span&gt;         5 hours ago         Up About an hour    0.0.0.0:9092-&amp;gt;9092/tcp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; 4bed6164a8e9 &lt;span class="s2"&gt;"bash"&lt;/span&gt; &lt;span class="c"&gt;# Remember to replace the Container ID with yours&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, let's create the topic. I'm going to call it &lt;em&gt;test&lt;/em&gt;, but the name is up to you. Just bear in mind that you will need the topic later!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;bash-4.4# Kafka-topics.sh &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="nt"&gt;--zookeeper&lt;/span&gt; zookeeper:2181 &lt;span class="nt"&gt;--replication-factor&lt;/span&gt; 1 &lt;span class="nt"&gt;--partitions&lt;/span&gt; 1 &lt;span class="nt"&gt;--topic&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  5 - Tinder match: Postgres ❤️ Kafka
&lt;/h3&gt;

&lt;p&gt;GOCDC provides a REST interface for creating the connection between the database and Kafka. If you look again at our docker-compose.yml file, GOCDC is using port 8000. So let's do a POST to localhost:8000/connectors/postgres&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'http://localhost:8000/connectors/postgres'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/JSON'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
    "connector_name":"Conn PG Test",
    "db_host": "localhost",
    "db_port": 5432,
    "db_user": "postgres",
    "db_pass": "postgres",
    "db_name": "example_db",
    "db_slot": "slot",
    "kafka_brokers": [ "localhost:9092" ],
    "kafka_topic": "test",
    "lookup_interval": 5000
}
'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's dive into the JSON object sent in this request:&lt;br&gt;
&lt;em&gt;connector_name&lt;/em&gt;: No big deal here, just an identifier. It is unique, though, so you can use it to edit the connector via a &lt;strong&gt;PUT&lt;/strong&gt; request. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;db_host&lt;/em&gt;: The IP where your Postgres is running. In our example, it is &lt;em&gt;localhost&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;db_port&lt;/em&gt;: The Port to access our Postgres database.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;db_user&lt;/em&gt; AND &lt;em&gt;db_pass&lt;/em&gt;: User and Password of our Postgres database.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;db_name&lt;/em&gt;: The name of the database.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;db_slot&lt;/em&gt;: The name of the &lt;em&gt;Replication Slot&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kafka_brokers&lt;/em&gt;: An array with our Kafka brokers. Well, we're running only one in our example, but you can set multiple.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;kafka_topic&lt;/em&gt;: Here is the &lt;strong&gt;Topic&lt;/strong&gt; that we created in Step 4.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;lookup_interval&lt;/em&gt;: Here you tell the connector how often it should execute a CDC lookup. In our example, every 5 seconds (yes, the parameter value is in &lt;em&gt;milliseconds&lt;/em&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  6 - Kafka Consumer:
&lt;/h3&gt;

&lt;p&gt;So, we now have our Postgres database up and running, our Kafka "cluster" also up and running and the Connector created! &lt;br&gt;
Assuming that you have the database and a table created in your Postgres (Step 3), connect to your &lt;em&gt;Kafka&lt;/em&gt; once again (&lt;strong&gt;Step 4&lt;/strong&gt;) and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;bash-4.4# Kafka-console-consumer.sh &lt;span class="nt"&gt;--bootstrap-server&lt;/span&gt; localhost:9092 &lt;span class="nt"&gt;--topic&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And now, you will be able to see every &lt;em&gt;Insert, Update and Delete&lt;/em&gt; from your database being sent to your Kafka consumer.&lt;/p&gt;
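&lt;p&gt;If you want to see it in action, you can generate a change from another terminal. A sketch, assuming you kept the database name &lt;em&gt;example_db&lt;/em&gt; and created an &lt;em&gt;employees&lt;/em&gt; table in Step 3 (adjust the names to whatever you actually created):&lt;/p&gt;

```shell
# Insert a row in Postgres; within ~5 seconds (our lookup_interval)
# the change event should show up in the Kafka consumer
docker exec -it my_postgres psql -U postgres example_db \
  -c "INSERT INTO employees (id, name, age) VALUES (2, 'Ada', 36);"
```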

&lt;p&gt;I hope you enjoyed the tutorial. If you got stuck at some step, please leave a comment and I'll do my best to help! 🖖&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>go</category>
    </item>
    <item>
      <title>Computational Offloading in Mobile Cloud Computing </title>
      <dc:creator>Thiago</dc:creator>
      <pubDate>Mon, 17 Feb 2020 22:05:27 +0000</pubDate>
      <link>https://forem.com/thiagosilvaf/computational-offloading-in-mobile-cloud-computing-1d33</link>
      <guid>https://forem.com/thiagosilvaf/computational-offloading-in-mobile-cloud-computing-1d33</guid>
      <description>&lt;p&gt;Back in 2014, in &lt;a href="https://ieeexplore.ieee.org/document/6553297"&gt;this&lt;/a&gt; paper, we've got to see the definition of Mobile Cloud Computing (MCC). It says that MCC is the combination of Cloud Computing and Mobile Computing 🤯. The concept of Cloud Computing sounds like the most obvious solution to the hardware limitation of Mobile devices. I know what you're looking at your newest iPhone or Xiaomi and thinking that &lt;em&gt;limitation&lt;/em&gt; might not be the right word, in fact, Smartphones nowadays are supercomputers in our pockets, but think for a moment: &lt;/p&gt;

&lt;p&gt;What if the battery of your phone dies RIGHT NOW? It is a "hardware limitation". &lt;br&gt;
What if you are on a bus, just passing through a tunnel, how flaky is your mobile connection going to be? &lt;/p&gt;

&lt;p&gt;I could keep talking about the "limitations" of our Smartphones, but if you think about it, those two scenarios that I mentioned would not be a problem for an old-fashioned Desktop, for example, as it is always plugged into a power socket and always on a wired connection or your house Wi-Fi, which is pretty good and stable. &lt;br&gt;
But mobile devices are not only Smartphones, right? They can be anything, such as your Smartwatch or your Nintendo Switch - that's why I always carry a 16000mAh Power Bank with me 🧞‍♂️. So as you can see, MCC Computational Offloading is still a very relevant topic in 2020.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Context-Awareness
&lt;/h3&gt;

&lt;p&gt;Context-Awareness is one of the most important topics in MCC. It has a significant impact on computational offloading due to the uncertainty about &lt;strong&gt;whether offloading would be beneficial or not&lt;/strong&gt;. Context Awareness in MCC can be classified into two approaches: &lt;em&gt;Context-Aware application partitioning&lt;/em&gt; and &lt;em&gt;Context-Aware computational offloading&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Context-Aware application partitioning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This approach applies when you already know which features of your application are &lt;em&gt;resource-intensive&lt;/em&gt;: you simply run those in the cloud, as simple as that. &lt;a href="https://www.researchgate.net/publication/287971934_mCloud_A_Context-Aware_Offloading_Framework_for_Heterogeneous_Mobile_Cloud"&gt;mCloud&lt;/a&gt; proposes an architecture for &lt;em&gt;dynamic&lt;/em&gt; offloading. This framework, composed of several components for decision making and context monitoring, was developed in Java using reflection to select the essential pieces of code to be offloaded to the cloud; it basically wraps the source code and sends it to the cloud, where it is compiled and executed. The advantage of mCloud is that you are able to offload smaller "parts" of your application, but as you can imagine, just doing that sounds like heavy computing itself. Also, this approach may not be the best decision in cases where the mobile application uses hardware features like sensors or GPS. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Context-Aware computation offloading&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Context-Aware methodology prioritises energy efficiency, application execution and performance. Here, you won't have specific functionality that is always offloaded to the Cloud; the decision to offload or not is based on some aspects:&lt;/p&gt;

&lt;h3&gt;
  
  
  Objective Awareness
&lt;/h3&gt;

&lt;p&gt;Computational offloading has three principles that were already mentioned in this research: performance enhancement, energy efficiency and execution support. Usually, the application will need to focus on one of those principles according to the respective scenario. The mCloud architecture, for example, prioritises performance and energy efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance Awareness
&lt;/h3&gt;

&lt;p&gt;In the majority of cases, computational offloading focuses on performance enhancement, but sometimes offloading a task takes more time than running it locally; in these cases, the offloading is not executed, so as not to compromise the performance of the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Energy Awareness
&lt;/h3&gt;

&lt;p&gt;Every time a mobile device runs an intensive computational task locally, a significant amount of energy is consumed, which is a problem due to the energy limitation of the devices.&lt;br&gt;
Many authors use energy awareness as the main factor to decide whether or not to offload the computational task to the cloud; however, to offload the task itself, the mobile device also consumes a considerable amount of energy during the request and the return of the result. The mCloud framework takes the consumption of energy into account during the decision-making phase: an algorithm produces an estimate of the amount of energy that the particular task will spend, and this information affects the final decision to offload or not to the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Awareness
&lt;/h3&gt;

&lt;p&gt;Resource Awareness requires the application to be aware of the mobile device's resources (whether the device has sufficient resources to run a computational process) as well as of the cloud environment. When the application has an intensive computational task to run, the cloud awareness verifies the conditions for offloading; these two factors are checked during the offloading decision-making phase.&lt;br&gt;
The aim of this research is performance enhancement as the major factor in deciding whether the offloading will be beneficial or not for each scenario. Also, context awareness is another important element during the decision-making phase. So it is possible to say that the experiment done for this research is a combination of performance awareness and resource awareness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is a summary of a topic that I believe is still relevant, especially if we look at mobile games, which have become such a big market, among so many other examples. &lt;/p&gt;

&lt;p&gt;Thanks for reading! 😁 &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://ieeexplore.ieee.org/document/6553297"&gt;https://ieeexplore.ieee.org/document/6553297&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ieeexplore.ieee.org/document/7158968"&gt;https://ieeexplore.ieee.org/document/7158968&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.researchgate.net/publication/287971934_mCloud_A_Context-Aware_Offloading_Framework_for_Heterogeneous_Mobile_Cloud"&gt;https://www.researchgate.net/publication/287971934_mCloud_A_Context-Aware_Offloading_Framework_for_Heterogeneous_Mobile_Cloud&lt;/a&gt;&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>android</category>
      <category>cloud</category>
      <category>mobile</category>
    </item>
    <item>
      <title>How to use Change Data Capture (CDC) with Postgres</title>
      <dc:creator>Thiago</dc:creator>
      <pubDate>Fri, 07 Feb 2020 13:09:19 +0000</pubDate>
      <link>https://forem.com/thiagosilvaf/how-to-use-change-database-capture-cdc-in-postgres-37b8</link>
      <guid>https://forem.com/thiagosilvaf/how-to-use-change-database-capture-cdc-in-postgres-37b8</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;The Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I'm always surprised by the number of people who have never heard about CDC, seriously. Say you need to capture every change in a specific table, like an Update, an Insert or a Delete; how do you do that? They say &lt;strong&gt;TRIGGERS!!!&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/LpkBAUDg53FI8xLmg1/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/LpkBAUDg53FI8xLmg1/giphy.gif" alt="Alt text of image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, let's be honest here, it is not completely wrong; it will do the job of &lt;em&gt;"capturing a change in a table"&lt;/em&gt;, but be aware that you're going to face some performance issues using this method, because Triggers are database operations that run before or after &lt;em&gt;Data Manipulation Language&lt;/em&gt; (DML) actions. &lt;a href="https://www.essentialsql.com/what-is-a-database-trigger/"&gt;Here&lt;/a&gt; you can read more about Triggers. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Solution - CDC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Alright, but we want to talk about &lt;em&gt;Log-Based CDC&lt;/em&gt;. Every DML action in a specific table is saved in a transactional log file, so we can take advantage of that. &lt;a href="https://www.hvr-software.com/blog/change-data-capture/"&gt;Here&lt;/a&gt; is a very good article about Log-Based CDC. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;CDC and Postgres - Hands-on&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Enough talking, I'm also going to show a quick Demo with Postgres. Get your Docker ready!!&lt;/p&gt;

&lt;p&gt;Here is my docker-compose.yml file, that's all that you'll need for this tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres:11"&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_postgres"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_USER=postgres&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_PASSWORD=postgres&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;POSTGRES_DB=shop_db&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5432:5432"&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;postgres"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wal_level=logical"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my_dbdata:/var/lib/postgresql/data&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my_dbdata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Copy this file and run &lt;strong&gt;docker-compose up -d&lt;/strong&gt;.&lt;br&gt;
Then run &lt;strong&gt;docker ps&lt;/strong&gt;; you should see something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0c28a37615f4        postgres:11              "docker-entrypoint.s…"   24 hours ago        Up 24 hours         0.0.0.0:5432-&amp;gt;5432/tcp                 my_postgres_
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Great! Our container is up and running, so now let's connect to our Postgres database running in Docker. Just run &lt;strong&gt;docker exec -it my_postgres psql -U postgres postgres&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, run the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="n"&gt;employees&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="k"&gt;primary&lt;/span&gt; &lt;span class="k"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;varchar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; &lt;br&gt;
Make sure the &lt;strong&gt;wal_level&lt;/strong&gt; is set to &lt;em&gt;logical&lt;/em&gt; and the &lt;strong&gt;max_replication_slots&lt;/strong&gt; is set to at least &lt;em&gt;1&lt;/em&gt;. To set these values, you will need to be a &lt;em&gt;superuser&lt;/em&gt;. In our example, it's all good as wal_level was set in our docker-compose.yml, but just in case you're trying to do it in your own Postgres database.&lt;br&gt;
If you want to check the parameters, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;show&lt;/span&gt; &lt;span class="n"&gt;max_replication_slots&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;show&lt;/span&gt; &lt;span class="n"&gt;wal_level&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alright, we're nearly there. Now, let's create the &lt;a href="https://www.postgresql.org/docs/9.4/catalog-pg-replication-slots.html"&gt;Slot&lt;/a&gt; by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_create_logical_replication_slot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'slot_test'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'test_decoding'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now, insert something into the &lt;em&gt;employees&lt;/em&gt; table that we created before and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;
&lt;span class="k"&gt;Insert&lt;/span&gt; &lt;span class="k"&gt;into&lt;/span&gt; &lt;span class="n"&gt;employees&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;values&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Thiago'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'99'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_logical_slot_peek_changes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;--and we should see something like that&lt;/span&gt;

&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;BEGIN&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;character&lt;/span&gt; &lt;span class="nb"&gt;varying&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="s1"&gt;'Thiago'&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEFA0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;COMMIT&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;

&lt;span class="c1"&gt;--Now lets run it again&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_logical_slot_peek_changes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;--We should see the same result... duuhhh!&lt;/span&gt;

&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;BEGIN&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;character&lt;/span&gt; &lt;span class="nb"&gt;varying&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="s1"&gt;'Thiago'&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEFA0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;COMMIT&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  pg_logical_slot_peek_changes vs pg_logical_slot_get_changes
&lt;/h4&gt;

&lt;p&gt;Not sure if you've noticed, but we've run &lt;strong&gt;pg_logical_slot_peek_changes&lt;/strong&gt; twice and got the exact same output both times: peeking does not consume the changes. You can also retrieve the log data using &lt;strong&gt;pg_logical_slot_get_changes&lt;/strong&gt;, which behaves differently. Let's take a look at it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_logical_slot_get_changes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;-- all good, our information is still here&lt;/span&gt;
 &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;BEGIN&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;
 &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEE88&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;table&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;employees&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;character&lt;/span&gt; &lt;span class="nb"&gt;varying&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="s1"&gt;'Thiago'&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;integer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;
 &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="n"&gt;CEFA0&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="k"&gt;COMMIT&lt;/span&gt; &lt;span class="mi"&gt;582&lt;/span&gt;

&lt;span class="c1"&gt;-- Alright, once again&lt;/span&gt;
&lt;span class="n"&gt;postgres&lt;/span&gt;&lt;span class="o"&gt;=#&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;pg_logical_slot_get_changes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'slot'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;--Holy shhh#@&amp;amp;.. &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Yeah, as you can see, &lt;strong&gt;pg_logical_slot_get_changes&lt;/strong&gt; consumes the changes as it reads them, so a second call returns nothing. That's exactly what you want when you only need the changes made since the last time you checked. It is going to be very useful for the next few posts, not saying much now. 😃&lt;/p&gt;
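&lt;p&gt;For those follow-up posts, a consumer like GoCdc has to turn those &lt;strong&gt;test_decoding&lt;/strong&gt; lines into structured events. Here's a minimal sketch in Go of what such a parser could look like. &lt;code&gt;ParseChange&lt;/code&gt; is a hypothetical helper for illustration only, not GoCdc's actual code:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Change is one row-level event decoded from a test_decoding output line.
type Change struct {
	Table   string            // e.g. "public.employees"
	Op      string            // INSERT, UPDATE or DELETE
	Columns map[string]string // column name -> textual value
}

// colRe matches fragments like: name[character varying]:'Thiago' or age[integer]:99
var colRe = regexp.MustCompile(`(\w+)\[([^\]]+)\]:('[^']*'|\S+)`)

// ParseChange parses a line such as:
//   table public.employees: INSERT: id[integer]:1 name[character varying]:'Thiago' age[integer]:99
// BEGIN/COMMIT marker lines return ok == false.
func ParseChange(line string) (Change, bool) {
	if !strings.HasPrefix(line, "table ") {
		return Change{}, false // skip "BEGIN 582" / "COMMIT 582" markers
	}
	parts := strings.SplitN(strings.TrimPrefix(line, "table "), ": ", 3)
	if len(parts) != 3 {
		return Change{}, false
	}
	c := Change{Table: parts[0], Op: parts[1], Columns: map[string]string{}}
	for _, m := range colRe.FindAllStringSubmatch(parts[2], -1) {
		c.Columns[m[1]] = strings.Trim(m[3], "'") // strip quotes around string values
	}
	return c, true
}

func main() {
	line := "table public.employees: INSERT: id[integer]:1 name[character varying]:'Thiago' age[integer]:99"
	if c, ok := ParseChange(line); ok {
		fmt.Printf("%s on %s: %v\n", c.Op, c.Table, c.Columns)
	}
}
```

&lt;p&gt;Feed it the &lt;strong&gt;data&lt;/strong&gt; column returned by &lt;strong&gt;pg_logical_slot_get_changes&lt;/strong&gt; and you get structured events ready to stream.&lt;/p&gt;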

&lt;p&gt;Anyway, I hope it was helpful and thanks for your time! 👋&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;references:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.hvr-software.com/blog/change-data-capture/"&gt;https://www.hvr-software.com/blog/change-data-capture/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server?view=sql-server-ver15"&gt;https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server?view=sql-server-ver15&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.postgresql.org/docs/9.4/catalog-pg-replication-slots.html"&gt;https://www.postgresql.org/docs/9.4/catalog-pg-replication-slots.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.essentialsql.com/what-is-a-database-trigger/"&gt;https://www.essentialsql.com/what-is-a-database-trigger/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
