<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Andrey Dudnik</title>
    <description>The latest articles on Forem by Andrey Dudnik (@ddkand).</description>
    <link>https://forem.com/ddkand</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1188404%2F5d8188d1-2966-4f6c-b4e7-58da3a1be9b9.jpg</url>
      <title>Forem: Andrey Dudnik</title>
      <link>https://forem.com/ddkand</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ddkand"/>
    <language>en</language>
    <item>
      <title>How rising Web3 infrastructure costs pushed me to build my own RPC aggregator</title>
      <dc:creator>Andrey Dudnik</dc:creator>
      <pubDate>Thu, 09 Apr 2026 09:39:13 +0000</pubDate>
      <link>https://forem.com/ddkand/how-rising-web3-infrastructure-costs-pushed-me-to-build-my-own-rpc-aggregator-gc7</link>
      <guid>https://forem.com/ddkand/how-rising-web3-infrastructure-costs-pushed-me-to-build-my-own-rpc-aggregator-gc7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvauhm3xam8r5qv6if6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvauhm3xam8r5qv6if6.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hi, my name is Andrey, and I work with blockchain data analytics, mostly in EVM-like networks.&lt;/p&gt;

&lt;p&gt;In practice, that means working with things like token prices across exchanges, NFT flows, liquidity depth in DEX pools, lending-related activity, flashloan opportunities, and other on-chain signals that can be turned into useful analytics.&lt;/p&gt;

&lt;p&gt;What pulled me into this area was the nature of blockchain data itself.&lt;br&gt;&lt;br&gt;
Unlike traditional fintech systems, blockchain data is open and immutable. Once something is written to the chain, it is usually there forever. That creates a very unusual environment for analytics: you can track how funds move, how wallets behave, how contracts are used, how liquidity appears and disappears, and how entire systems evolve over time.&lt;/p&gt;

&lt;p&gt;But there is an important practical detail here.&lt;/p&gt;

&lt;p&gt;On-chain analytics does not start with charts or SQL. It starts with access to the network.&lt;/p&gt;

&lt;p&gt;To collect data, you need a stable way to talk to blockchains. Very quickly, this turns into an infrastructure problem: nodes, RPC, APIs, sync status, availability, rate limits, and cost.&lt;/p&gt;
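
In practice, "talking to a blockchain" mostly means JSON-RPC over HTTP. A minimal sketch of what such a request and response look like (the helper names here are mine, invented for illustration; the method name is standard Ethereum JSON-RPC):

```python
import json

# Hypothetical helpers, for illustration only.
# "eth_blockNumber" is a standard Ethereum JSON-RPC method.
def make_rpc_payload(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body for an EVM node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

def parse_block_number(response_text):
    """Extract the hex-encoded block number from a typical response."""
    body = json.loads(response_text)
    return int(body["result"], 16)

payload = make_rpc_payload("eth_blockNumber")
# A node would answer with something like:
fake_response = '{"jsonrpc":"2.0","id":1,"result":"0x10d4f"}'
print(parse_block_number(fake_response))  # 68943
```

Every provider plan, rate limit, and invoice ultimately counts calls like this one.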

&lt;p&gt;At a high level, analytics comes down to two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;knowing which contracts, events, and transactions to inspect;&lt;/li&gt;
&lt;li&gt;being able to extract that data reliably and at scale.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first part is hard, but manageable.&lt;br&gt;&lt;br&gt;
The second part is where things get expensive and painful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The old model stopped working
&lt;/h2&gt;

&lt;p&gt;The most direct solution is to run your own nodes and use their APIs.&lt;/p&gt;

&lt;p&gt;But that is no longer trivial.&lt;/p&gt;

&lt;p&gt;For example, running Ethereum execution + consensus infrastructure now requires a serious machine: fast NVMe storage, a lot of RAM, and a decent CPU. In practice, this can easily mean something like 4TB NVMe, 64GB RAM, and a modern processor. And even with the hardware ready, there is still sync time, which can take more than 72 hours.&lt;/p&gt;

&lt;p&gt;So the market naturally evolved a different offer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don’t run your own nodes. Use our RPC. Start on a free tier, then upgrade to paid plans when needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A few years ago, this model worked surprisingly well.&lt;br&gt;&lt;br&gt;
Free plans were often enough to build an MVP, test ideas, and even run small products. And if things grew, paid plans or PAYG usually made sense.&lt;/p&gt;

&lt;p&gt;But over time, that changed.&lt;/p&gt;

&lt;p&gt;If I look back at roughly 2018–2020, that period felt like a very good time for on-chain analytics. Hardware was much cheaper, networks were smaller, and workloads were lighter. Then L2 adoption accelerated, transaction costs dropped, usage exploded, and the amount of chain data grew massively.&lt;/p&gt;

&lt;p&gt;As a result, the economics changed too.&lt;/p&gt;

&lt;p&gt;What used to be true:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hardware was affordable;&lt;/li&gt;
&lt;li&gt;free RPC plans were often enough for early analytics workloads;&lt;/li&gt;
&lt;li&gt;self-hosting was still realistic for many developers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NVMe prices increased sharply;&lt;/li&gt;
&lt;li&gt;large SSDs became much more expensive;&lt;/li&gt;
&lt;li&gt;free RPC plans became heavily restricted;&lt;/li&gt;
&lt;li&gt;those free plans became almost useless for serious analytics workloads;&lt;/li&gt;
&lt;li&gt;paid plans started at a level that is hard to justify for solo builders and small teams;&lt;/li&gt;
&lt;li&gt;and once the workload grows, costs can climb very quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, collecting data from Web3 networks — or building a small analytics startup on top of them — became much less accessible than it used to be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I started self-hosting anyway
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhli4hrkmp9wtzcoyzf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhli4hrkmp9wtzcoyzf7.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hit this problem myself.&lt;/p&gt;

&lt;p&gt;I needed to make a large number of heavy requests across multiple networks. Free providers were no longer enough, and paying commercial rates for the amount of traffic I needed did not feel rational.&lt;/p&gt;

&lt;p&gt;That left very few options.&lt;/p&gt;

&lt;p&gt;The main one was obvious: run my own nodes.&lt;/p&gt;

&lt;p&gt;I did not want to do it. Not because of the hardware alone, but because of everything that comes with it: updating node software, monitoring sync state, dealing with lag, handling failures, and generally maintaining a small fleet of infrastructure.&lt;/p&gt;

&lt;p&gt;Still, I did it.&lt;/p&gt;

&lt;p&gt;I started with one node. Then another. Then more.&lt;/p&gt;

&lt;p&gt;And very quickly I ran into another problem: a node by itself is not a solution.&lt;/p&gt;

&lt;p&gt;Nodes need updates. During updates, they can fall behind the network. After updates, they need time to catch up. If production traffic is hitting a lagging node during that period, things stop behaving the way you want them to.&lt;/p&gt;

&lt;p&gt;That naturally led to the next step: if I wanted stable access, I needed at least two nodes in HA mode and something in front of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  HAProxy worked — until it didn’t
&lt;/h2&gt;

&lt;p&gt;I chose HAProxy.&lt;/p&gt;

&lt;p&gt;It can route traffic, run health checks, and generally do what a load balancer is supposed to do. On paper, the setup looked fine: two nodes behind HAProxy, traffic balanced between them, one node can go down for updates while the other keeps serving requests.&lt;/p&gt;
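
For reference, the setup described above can be sketched in a few lines of HAProxy configuration (addresses, ports, and names are illustrative, not my exact config):

```
frontend rpc_in
    bind *:8545
    default_backend eth_nodes

backend eth_nodes
    balance roundrobin
    # An HTTP check only confirms the endpoint answers;
    # it says nothing about whether the node is actually synced.
    option httpchk POST /
    server node1 10.0.0.11:8545 check
    server node2 10.0.0.12:8545 check
```

One node can be taken down for updates while the other keeps serving traffic.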

&lt;p&gt;And yes, that setup worked.&lt;/p&gt;

&lt;p&gt;But over time, I realized that HAProxy was not really the right abstraction for the problem I was solving.&lt;/p&gt;

&lt;p&gt;What I actually needed was not just balancing. I needed routing logic that understood Web3-specific realities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;whether a node was truly synced;&lt;/li&gt;
&lt;li&gt;whether it was still catching up;&lt;/li&gt;
&lt;li&gt;whether it should be used for archive-only requests;&lt;/li&gt;
&lt;li&gt;whether a consensus endpoint should be handled differently from execution traffic;&lt;/li&gt;
&lt;li&gt;whether one provider or node was more appropriate for a specific route or workload.&lt;/li&gt;
&lt;/ul&gt;
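
As a toy illustration of the kind of selection logic involved (the node model, field names, and thresholds below are invented for this post; the real service is more involved):

```python
from dataclasses import dataclass

# Toy model of an upstream RPC source. Fields and thresholds
# are illustrative, not the real service's data model.
@dataclass
class Upstream:
    name: str
    head: int       # latest block the node reports
    syncing: bool   # still catching up after an update?
    archive: bool   # can it serve historical state?

def eligible(nodes, network_tip, need_archive=False, max_lag=3):
    """Pick upstreams that are safe to route to right now."""
    picked = []
    for n in nodes:
        if n.syncing:
            continue                       # never route to a catching-up node
        if network_tip - n.head > max_lag:
            continue                       # too far behind the chain tip
        if need_archive and not n.archive:
            continue                       # archive requests need archive nodes
        picked.append(n.name)
    return picked

nodes = [
    Upstream("self-hosted-1", head=19_000_000, syncing=False, archive=True),
    Upstream("self-hosted-2", head=18_999_990, syncing=True,  archive=False),
    Upstream("provider-a",    head=19_000_001, syncing=False, archive=False),
]
print(eligible(nodes, network_tip=19_000_001))                     # ['self-hosted-1', 'provider-a']
print(eligible(nodes, network_tip=19_000_001, need_archive=True))  # ['self-hosted-1']
```

A generic reverse proxy can check "is the port open"; it cannot easily express rules like these.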

&lt;p&gt;HAProxy is powerful, but maintaining this kind of logic there became too uncomfortable and too brittle for my use case.&lt;/p&gt;

&lt;p&gt;That is when I decided to build a dedicated service for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building my own RPC aggregator
&lt;/h2&gt;

&lt;p&gt;The idea was simple:&lt;/p&gt;

&lt;p&gt;build a layer that can aggregate different sources, route and balance requests between them, understand which nodes are healthy, which are behind, which are suitable only for certain request types, and which should not be used at all.&lt;/p&gt;

&lt;p&gt;In other words, not just a generic load balancer in front of nodes, but a service built specifically around the realities of Web3 infrastructure.&lt;/p&gt;

&lt;p&gt;That is how my RPC aggregator appeared.&lt;/p&gt;

&lt;p&gt;At this point it is still in alpha, but it already covers some of the pain points that pushed me to build it in the first place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;aggregation of multiple providers and self-hosted nodes;&lt;/li&gt;
&lt;li&gt;separate routing for execution, archive, and consensus traffic;&lt;/li&gt;
&lt;li&gt;balancing and request routing;&lt;/li&gt;
&lt;li&gt;infrastructure checks that are more practical than using a generic reverse proxy alone.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I’m opening it up
&lt;/h2&gt;

&lt;p&gt;At some point I realized something else: this setup is already larger than what I personally need for my own workloads.&lt;/p&gt;

&lt;p&gt;So instead of keeping all of that capacity to myself, I decided to open it up in alpha.&lt;/p&gt;

&lt;p&gt;Right now I am not trying to sell it.&lt;br&gt;&lt;br&gt;
What I care about more is understanding whether this is actually useful to other people dealing with similar problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;on-chain analytics;&lt;/li&gt;
&lt;li&gt;bots;&lt;/li&gt;
&lt;li&gt;infra tooling;&lt;/li&gt;
&lt;li&gt;archive-heavy workloads;&lt;/li&gt;
&lt;li&gt;consensus or beacon access;&lt;/li&gt;
&lt;li&gt;high-volume RPC use cases that do not fit well into free-tier providers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If that sounds relevant, I’m happy to share access in exchange for honest feedback.&lt;/p&gt;

&lt;p&gt;What I want to learn is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is this actually useful in practice?&lt;/li&gt;
&lt;li&gt;which networks or routes are missing?&lt;/li&gt;
&lt;li&gt;where are the weak points?&lt;/li&gt;
&lt;li&gt;what should be improved first?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If there is interest, I can also write a more technical follow-up about the architecture itself: how the routing layer works, why I moved away from a pure HAProxy-based setup, and what kinds of infrastructure problems start appearing once you build this for real analytics workloads.&lt;/p&gt;

&lt;p&gt;If this is relevant to your work, feel free to reach out.&lt;/p&gt;

&lt;p&gt;Telegram: &lt;code&gt;@ddkand&lt;/code&gt;&lt;/p&gt;

</description>
      <category>web3</category>
      <category>ethereum</category>
      <category>devops</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>One of many ways to migrate from NodeJS to Rust</title>
      <dc:creator>Andrey Dudnik</dc:creator>
      <pubDate>Wed, 22 Nov 2023 17:11:32 +0000</pubDate>
      <link>https://forem.com/ddkand/one-of-many-ways-to-migrate-from-nodejs-to-rust-1h89</link>
      <guid>https://forem.com/ddkand/one-of-many-ways-to-migrate-from-nodejs-to-rust-1h89</guid>
      <description>&lt;p&gt;This post describes my personal approach and the experience I have gained. It may contain some deviations, but they are not critical to understanding and usage.&lt;/p&gt;

&lt;p&gt;My goal is to develop a microservice in Rust that matches the speed, safety, and ease of development found in NodeJS and TypeScript with NestJS.&lt;br&gt;
What's more important to me is the use of modern system design patterns such as IoC, DDD, CQRS, and Hexagonal architecture, without relying on complex frameworks.&lt;/p&gt;

&lt;p&gt;To start with, as I mentioned earlier, this post won't involve complex frameworks such as rocket_rs or actix_web. This choice is deliberate, because delving into such frameworks tends to require learning numerous components that may not significantly contribute to creating clear and comprehensible software. However, if you prefer to use frameworks, that's entirely fine; it is just not the approach I am taking here.&lt;/p&gt;

&lt;p&gt;Let's begin preparing to create the Todo microservice in Rust. Initially, we need to decide on the crates that will be used in the microservice. For this purpose, I published my own implementations of IoC &amp;amp; CQRS. I chose this route to gain a deeper understanding of these methodologies, as some other crates were either outdated or lacked support from the community.&lt;/p&gt;

&lt;p&gt;Links of crates:&lt;br&gt;
&lt;a href="https://crates.io/crates/ioc_container_rs" rel="noopener noreferrer"&gt;IoC container&lt;/a&gt;&lt;br&gt;
&lt;a href="https://crates.io/crates/kti_cqrs_rs" rel="noopener noreferrer"&gt;CQRS implementation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://crates.io/crates/kti_cqrs_provider_rs" rel="noopener noreferrer"&gt;CQRS implementation with IoC wrapper&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other crates of microservice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;tonic&lt;/strong&gt; - grpc implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hyper&lt;/strong&gt; - http implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sqlx&lt;/strong&gt; - driver for managing PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tokio&lt;/strong&gt; - handling asynchronous operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all we will be using.&lt;/p&gt;

&lt;p&gt;I lean toward an approach where the application is divided into smaller slices, encapsulated in different packages for reuse. The Cargo workspace package structure will look like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;API&lt;/strong&gt; - the main app that handles client calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common&lt;/strong&gt; - the domain area, with interfaces for the core packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Config&lt;/strong&gt; - holds environment variables and provides them to the other packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core&lt;/strong&gt; - business logic, such as CRUD operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository&lt;/strong&gt; - the persistence layer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema GRPC&lt;/strong&gt; - proto files and network contracts for accessing the microservice&lt;/li&gt;
&lt;/ol&gt;
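
A Cargo workspace root along these lines ties the packages together (the member names are my guess at the layout; see the repository linked at the end of the post for the exact structure):

```toml
# Root Cargo.toml of the workspace.
# Member names are illustrative, not necessarily the real ones.
[workspace]
members = [
  "api",
  "common",
  "config",
  "core",
  "repository",
  "schema-grpc",
]
```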

&lt;p&gt;Let's take a closer look at each of them.&lt;/p&gt;

&lt;p&gt;The API package contains the gRPC server that hosts our gRPC controllers (in this example, a single controller with basic CRUD operations) and an HTTP server with a health-check controller, which can be used for Kubernetes probes.&lt;/p&gt;

&lt;p&gt;The Common package contains only the domain entities (admittedly not the best place for them, but it is fine for now) and the interfaces implemented by the database repositories, business services, and so on.&lt;/p&gt;

&lt;p&gt;The Config package is a fairly simple library that provides a store of environment variables, such as database credentials and application hosts and ports.&lt;/p&gt;

&lt;p&gt;The Repository package is a more involved layer that works only with the database, which in this case is PostgreSQL.&lt;/p&gt;

&lt;p&gt;The Schema GRPC package accumulates the proto files and generates the bindings used to communicate with the microservice via its API.&lt;/p&gt;

&lt;p&gt;Diagram:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4em8vphqc4zkczuy950m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4em8vphqc4zkczuy950m.png" alt="Diagram of modules"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The previous section was more theoretical; now we will dive into the code.&lt;/p&gt;

&lt;p&gt;We will start by creating a simple todo controller responsible for handling client requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub struct TodoGrpcController {
  context: ContainerContext,
}

impl TodoGrpcController {
  pub fn new(props: ContainerContextProps) -&amp;gt; Self {
    Self {
      context: ContainerContext::new(props),
    }
  }

  fn get_bus(&amp;amp;self) -&amp;gt; Box&amp;lt;CqrsProvider::Provider&amp;lt;AppContext&amp;gt;&amp;gt; {
    self.context.resolve_provider(CqrsProvider::TOKEN_PROVIDER)
  }
}

#[async_trait]
impl TodoService for TodoGrpcController {
  async fn create(
    &amp;amp;self,
    request: Request&amp;lt;CreateTodoRequest&amp;gt;,
  ) -&amp;gt; Result&amp;lt;Response&amp;lt;CreateTodoResponse&amp;gt;, Status&amp;gt; {
    let request = request.into_inner();

    let bus = self.get_bus();

    let command = CreateTodoCase::Command::new(&amp;amp;request.name, &amp;amp;request.description);

    let todo_entity = bus.command(Box::new(command)).await.unwrap();

    Ok(Response::new(CreateTodoResponse {
      status_code: 200,
      message: String::from("CREATED"),
      data: Some(Todo {
        id: todo_entity.get_id().to_string(),
        name: todo_entity.get_name().to_string(),
        description: todo_entity.get_description().to_string(),
        completed: todo_entity.get_completed(),
        created_at: todo_entity.get_created_at().to_string(),
        updated_at: todo_entity.get_updated_at().to_string(),
      }),
    }))
  }

  async fn get_by_id(
    &amp;amp;self,
    request: Request&amp;lt;GetTodoByIdRequest&amp;gt;,
  ) -&amp;gt; Result&amp;lt;Response&amp;lt;GetTodoByIdResponse&amp;gt;, Status&amp;gt; {
    let request = request.into_inner();

    let bus = self.get_bus();

    let query = GetTodoByIdCase::Query::new(&amp;amp;request.id);

    match bus.query(Box::new(query)).await.unwrap() {
      Some(r) =&amp;gt; Ok(Response::new(GetTodoByIdResponse {
        status_code: 200,
        message: String::from("SUCCESS"),
        data: Some(Todo {
          id: r.get_id().to_string(),
          name: r.get_name().to_string(),
          description: r.get_description().to_string(),
          completed: r.get_completed(),
          created_at: r.get_created_at().to_string(),
          updated_at: r.get_updated_at().to_string(),
        }),
      })),
      None =&amp;gt; return Err(Status::not_found("Not found todo by id.")),
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The methods are pretty concise, each containing fundamental logic. For instance, let's delve into the creation of a todo item. First, I introduced a method to retrieve the CQRS bus (get_bus), which serves as the pathway for routing commands and queries.&lt;/p&gt;

&lt;p&gt;Subsequently, a command is crafted and sent to the command bus using &lt;code&gt;bus.command&lt;/code&gt;. Following this, the task's logic is delegated to the command handler. These handlers operate as layers responsible for executing various business cases, such as creating new todo items.&lt;/p&gt;

&lt;p&gt;Let's explore the code for the creation of a todo in the command handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#[derive(Clone)]
  pub struct Command {
    name: String,
    description: String,
  }

  impl Command {
    pub fn new(name: &amp;amp;str, description: &amp;amp;str) -&amp;gt; Self {
      Self {
        name: name.to_string(),
        description: description.to_string(),
      }
    }
  }

  #[async_trait]
  impl CommandHandler for Command {
    type Context = AppContext;
    type Output = Result&amp;lt;TodoEntity, Box&amp;lt;dyn Error&amp;gt;&amp;gt;;

    async fn execute(&amp;amp;self, context: Arc&amp;lt;Mutex&amp;lt;Self::Context&amp;gt;&amp;gt;) -&amp;gt; Self::Output {
      let repository = context.lock().unwrap().get_command().get_repository();

      repository
        .create(&amp;amp;TodoEntity::new(
          &amp;amp;Uuid::new_v4().to_string(),
          &amp;amp;self.name,
          &amp;amp;self.description,
          false,
          &amp;amp;Local::now().to_string(),
          &amp;amp;Local::now().to_string(),
        ))
        .await
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logic here is rather straightforward: a new todo entity is created and persisted via the repository to the PostgreSQL database through sqlx.&lt;/p&gt;

&lt;p&gt;In the repository layers, it's important to follow a key principle from CQRS, which suggests splitting read and write actions. This means having two separate database pools — one for writing data and the other for reading it. Yet, for simplicity in this example, I have chosen to create two simplified versions: one for executing commands and another for dealing with queries. You can adjust this setup to match your own needs.&lt;/p&gt;

&lt;p&gt;The todo query repository code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#[derive(Clone)]
pub struct SqlxQueryRepository {
  pool: PgPool,
}

impl SqlxQueryRepository {
  pub fn new(pool: PgPool) -&amp;gt; Self {
    Self { pool }
  }
}

#[async_trait]
impl TodoQueryRepository for SqlxQueryRepository {
  async fn get_by_id(&amp;amp;self, id: &amp;amp;str) -&amp;gt; Result&amp;lt;Option&amp;lt;TodoEntity&amp;gt;, Box&amp;lt;dyn Error&amp;gt;&amp;gt; {
    let row = sqlx::query(
      "
      SELECT * FROM todo WHERE id = $1 LIMIT 1
    ",
    )
    .bind(Uuid::parse_str(id)?)
    // fetch_optional yields Ok(None) when no row matches, so "not found"
    // is not conflated with real database errors.
    .fetch_optional(&amp;amp;self.pool)
    .await?;

    Ok(row.map(|r| TodoSqlxMapper::pg_row_to_entity(&amp;amp;r)))
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And todo command repository code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#[derive(Clone)]
pub struct SqlxCommandRepository {
  pool: PgPool,
}

impl SqlxCommandRepository {
  pub fn new(pool: PgPool) -&amp;gt; Self {
    Self { pool }
  }
}

#[async_trait]
impl TodoCommandRepository for SqlxCommandRepository {
  async fn create(&amp;amp;self, todo: &amp;amp;TodoEntity) -&amp;gt; Result&amp;lt;TodoEntity, Box&amp;lt;dyn Error&amp;gt;&amp;gt; {
    let created_at = match sqlx_parse_utils::string_to_timestamp(&amp;amp;todo.get_created_at()) {
      Ok(r) =&amp;gt; r,
      Err(e) =&amp;gt; return Err(e.into()),
    };

    let updated_at = match sqlx_parse_utils::string_to_timestamp(&amp;amp;todo.get_updated_at()) {
      Ok(r) =&amp;gt; r,
      Err(e) =&amp;gt; return Err(e.into()),
    };

    let row = sqlx::query(
      "
      INSERT INTO todo (id, name, description, completed, created_at, updated_at)
      VALUES ($1, $2, $3, $4, $5, $6) RETURNING *
    ",
    )
    .bind(Uuid::parse_str(&amp;amp;todo.get_id()).unwrap())
    .bind(&amp;amp;todo.get_name())
    .bind(&amp;amp;todo.get_description())
    .bind(&amp;amp;todo.get_completed())
    .bind(created_at)
    .bind(updated_at)
    .fetch_one(&amp;amp;self.pool)
    // Propagate database errors instead of panicking on unwrap().
    .await?;

    Ok(TodoSqlxMapper::pg_row_to_entity(&amp;amp;row))
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These repositories should be fairly easy to follow, as they implement the basic logic for managing SQL data.&lt;/p&gt;

&lt;p&gt;We are almost done, so let's try making some requests via the gRPC CLI.&lt;br&gt;
In this example I will use the official gRPC CLI for macOS (brew install grpc). tonic supports server reflection by default,&lt;br&gt;
so you can use this CLI without any problems.&lt;/p&gt;

&lt;p&gt;Create the new todo:&lt;br&gt;
&lt;code&gt;$ grpc_cli call localhost:50051 Create "name: 'Read the book', description: 'I would like to read 10 pages'"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connecting to localhost:50051
Received initial metadata from server:
date : Sun, 12 Nov 2023 07:14:53 GMT
status_code: 200
message: "CREATED"
data {
  id: "112e6968-3424-4575-8bde-a16bcf64eeb6"
  name: "Read the book"
  description: "I would like to read 10 pages"
  created_at: "2023-11-12 14:14:53.945656"
  updated_at: "2023-11-12 14:14:53.945855"
}
Rpc succeeded with OK status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the todo by id:&lt;br&gt;
&lt;code&gt;$ grpc_cli call localhost:50051 Update "id: '112e6968-3424-4575-8bde-a16bcf64eeb6', completed: true"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connecting to localhost:50051
Received initial metadata from server:
date : Sun, 12 Nov 2023 07:27:05 GMT
status_code: 200
message: "SUCCESS"
data {
  id: "112e6968-3424-4575-8bde-a16bcf64eeb6"
  name: "Read the book"
  description: "I would like to read 10 pages"
  completed: true
  created_at: "2023-11-12 14:14:53.945656"
  updated_at: "2023-11-12 14:26:48.589915"
}
Rpc succeeded with OK status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get paginated todo list:&lt;br&gt;
&lt;code&gt;$ grpc_cli call localhost:50051 GetPaginated "page: 0, limit: 10"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connecting to localhost:50051
Received initial metadata from server:
date : Sun, 12 Nov 2023 11:17:22 GMT
status_code: 200
message: "SUCCESS"
data {
  id: "c641207e-7c1f-40ec-9735-70b3944bb1b1"
  name: "Buy drinks"
  description: "Going to market and buy some drinks"
  created_at: "2023-11-12 14:18:04.789755"
  updated_at: "2023-11-12 14:18:04.789791"
}
data {
  id: "f65e81f8-1620-4c68-9b58-3936bd250b0f"
  name: "Check the mail"
  description: "Looking for new messages in gmail"
  created_at: "2023-11-12 14:15:43.271947"
  updated_at: "2023-11-12 14:15:43.272056"
}
data {
  id: "112e6968-3424-4575-8bde-a16bcf64eeb6"
  name: "Read the book"
  description: "I would like to read 10 pages"
  completed: true
  created_at: "2023-11-12 14:14:53.945656"
  updated_at: "2023-11-12 14:26:48.589915"
}
Rpc succeeded with OK status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, that's all.&lt;br&gt;
I have finished the minimal microservice in Rust without any complex frameworks, and the main goal of implementing a clear architecture has been achieved.&lt;/p&gt;

&lt;p&gt;I will leave the link to GitHub below, where you can explore the whole repo.&lt;/p&gt;

&lt;p&gt;PS.&lt;br&gt;
My approach may look somewhat unfinished, but if you would like to dive into Rust with a NodeJS background, I think this post will help you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kotletti/kti-example-ms" rel="noopener noreferrer"&gt;Github link to repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>backend</category>
      <category>microservices</category>
      <category>cqrs</category>
    </item>
  </channel>
</rss>
