<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Artem</title>
    <description>The latest articles on Forem by Artem (@artemooon).</description>
    <link>https://forem.com/artemooon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3501373%2F6932a8cd-b29d-4f2a-9530-a855c8d9ddf6.png</url>
      <title>Forem: Artem</title>
      <link>https://forem.com/artemooon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/artemooon"/>
    <language>en</language>
    <item>
      <title>I built a Self-Hosted Solana Payments Library for Django Framework</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sun, 29 Mar 2026 14:38:30 +0000</pubDate>
      <link>https://forem.com/artemooon/stop-paying-crypto-gateway-fees-introducing-self-hosted-solana-payments-for-django-framework-1hm8</link>
      <guid>https://forem.com/artemooon/stop-paying-crypto-gateway-fees-introducing-self-hosted-solana-payments-for-django-framework-1hm8</guid>
      <description>&lt;p&gt;I have built a library for the &lt;a href="https://www.djangoproject.com/" rel="noopener noreferrer"&gt;Django&lt;/a&gt; framework that simplifies the process of accepting crypto payments for your e-commerce (and other) applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/django-solana-payments/" rel="noopener noreferrer"&gt;django-solana-payments&lt;/a&gt; is an open-source Django library for accepting self-hosted Solana payments with automated on-chain verification.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Artemooon" rel="noopener noreferrer"&gt;
        Artemooon
      &lt;/a&gt; / &lt;a href="https://github.com/Artemooon/django-solana-payments" rel="noopener noreferrer"&gt;
        django-solana-payments
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A plug-and-play Django library for accepting online payments via the Solana blockchain
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Django Solana Payments&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/c81e9f720cdbe3d521a5bc58580ed0fed965374e731b264a3e02616e05ed53cf/68747470733a2f2f6170702e72656164746865646f63732e6f72672f70726f6a656374732f646a616e676f2d736f6c616e612d7061796d656e74732f62616467652f3f76657273696f6e3d6c6174657374"&gt;&lt;img src="https://camo.githubusercontent.com/c81e9f720cdbe3d521a5bc58580ed0fed965374e731b264a3e02616e05ed53cf/68747470733a2f2f6170702e72656164746865646f63732e6f72672f70726f6a656374732f646a616e676f2d736f6c616e612d7061796d656e74732f62616467652f3f76657273696f6e3d6c6174657374" alt="Documentation Status"&gt;&lt;/a&gt;
&lt;a href="https://coveralls.io/github/Artemooon/django-solana-payments?branch=main" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/60e889a25c1389e9939d4686b87de88a909120a1313d3a1ce08b3bd8c1c040c1/68747470733a2f2f636f766572616c6c732e696f2f7265706f732f6769746875622f417274656d6f6f6f6e2f646a616e676f2d736f6c616e612d7061796d656e74732f62616467652e7376673f6272616e63683d6d61696e" alt="Coverage Status"&gt;&lt;/a&gt;
&lt;a href="https://badge.fury.io/py/django-solana-payments" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/31ec23b309d0f159e0c2cc9fab3a78cc19ef17845d36339bce4e055c63c79c55/68747470733a2f2f62616467652e667572792e696f2f70792f646a616e676f2d736f6c616e612d7061796d656e74732e737667" alt="PyPI version"&gt;&lt;/a&gt;
&lt;a href="https://pypi.python.org/pypi/django-solana-payments" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/50f7ef09eaaca71233a3656653544a00e3e40485a76024e0e7284dd9c41ae8a8/68747470733a2f2f696d672e736869656c64732e696f2f707970692f707976657273696f6e732f646a616e676f2d736f6c616e612d7061796d656e74732e737667" alt="Python versions"&gt;&lt;/a&gt;
&lt;a href="https://github.com/Artemooon/django-solana-payments/blob/main/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/7013272bd27ece47364536a221edb554cd69683b68a46fc0ee96881174c4214c/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d4d49542d626c75652e737667" alt="License"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A Django library for integrating Solana payments into your project. This library provides a flexible and customizable way to accept Solana payments with support for customizable models, an easy-to-use API, and management commands for processing online payments using the Solana blockchain.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Key Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transaction verification and automatic payment confirmation&lt;/strong&gt;: Monitors the Solana blockchain, verifies incoming transactions, and automatically confirms payments when the expected amount is received.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-token support (SOL and SPL tokens)&lt;/strong&gt;: Configure a list of active payment tokens (for example, SOL and USDC) and the library will use them for pricing and verification flows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility and customization&lt;/strong&gt;: Use your own custom models for payments and tokens to fit your project's needs. Add custom logic using signals or callbacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of integration&lt;/strong&gt;: Provides ready-to-use endpoints that can be used in existing DRF applications, or ready-to-use methods for Django applications that are not part…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Artemooon/django-solana-payments" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Main Benefits&lt;/li&gt;
&lt;li&gt;Package Features&lt;/li&gt;
&lt;li&gt;Quick Start&lt;/li&gt;
&lt;li&gt;How It Works and Tech Stack&lt;/li&gt;
&lt;li&gt;Development Process and Takeaways&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The library was inspired by projects such as &lt;a href="https://github.com/jazzband/django-payments" rel="noopener noreferrer"&gt;django-payments&lt;/a&gt;, but designed specifically for Solana-based payments and for teams that want a self-hosted, flexible integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No platform fees: &lt;a href="https://django-solana-payments.readthedocs.io/en/latest/#fees-and-chain-costs" rel="noopener noreferrer"&gt;You pay only standard blockchain transaction fees&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Fully self-hosted: Everything runs on your own infrastructure.&lt;/li&gt;
&lt;li&gt;No third-party dependency: No reliance on external payment gateways or complex webhook management.&lt;/li&gt;
&lt;li&gt;Easy integration: &lt;a href="https://django-solana-payments.readthedocs.io/en/latest/drf_api_usage.html#integration-flow" rel="noopener noreferrer"&gt;Get started in just three steps&lt;/a&gt; with Django and DRF.&lt;/li&gt;
&lt;li&gt;Token Support: Supports native SOL and any &lt;a href="https://django-solana-payments.readthedocs.io/en/latest/payment_tokens.html" rel="noopener noreferrer"&gt;Solana SPL token&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Extensible: Customize &lt;a href="https://django-solana-payments.readthedocs.io/en/latest/custom_models.html" rel="noopener noreferrer"&gt;models&lt;/a&gt; and react to events with &lt;a href="https://django-solana-payments.readthedocs.io/en/latest/payment_hooks.html" rel="noopener noreferrer"&gt;Django signals&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Package Features
&lt;/h2&gt;

&lt;p&gt;This package is aimed at developers and companies who want a robust, ready-to-use solution for safely accepting Solana payments inside their own Django infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ready-to-use DRF endpoints: For payment initiation, transfer verification, and payment details.&lt;/li&gt;
&lt;li&gt;Customizable models: Integrate with your existing domain models easily.&lt;/li&gt;
&lt;li&gt;Signals (Hooks/Callbacks): For processing post-payment business logic.&lt;/li&gt;
&lt;li&gt;Management commands: For reconciliation, fund consolidation, and reclaiming rent from token accounts.&lt;/li&gt;
&lt;li&gt;Security: Built-in encryption for one-time wallet private keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;To get started, install the package via pip:&lt;/p&gt;


&lt;div class="crayons-card c-embed"&gt;

  
&lt;h3&gt;
  
  
  Get Started
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;pip install django-solana-payments&lt;/code&gt;&lt;br&gt;

&lt;/p&gt;
&lt;/div&gt;


&lt;p&gt;Then, add it to your &lt;code&gt;INSTALLED_APPS&lt;/code&gt; and configure your Solana RPC node in your settings.&lt;/p&gt;
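&lt;p&gt;As an illustration only, a settings fragment might look like the sketch below. The app label and the RPC setting name are placeholders I have assumed, not the library's documented names; check the README and docs for the real ones.&lt;/p&gt;

```python
# Hypothetical settings.py fragment. "solana_payments" and SOLANA_RPC_URL
# are assumed placeholder names -- consult the django-solana-payments
# README for the actual app label and setting names.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    # ... your apps ...
    "solana_payments",  # placeholder app label
]

# RPC node the library talks to (devnet here; use a mainnet RPC in production)
SOLANA_RPC_URL = "https://api.devnet.solana.com"
```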

&lt;p&gt;See more in the &lt;a href="https://github.com/Artemooon/django-solana-payments?tab=readme-ov-file#django-solana-payments" rel="noopener noreferrer"&gt;README&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works and Tech Stack
&lt;/h2&gt;

&lt;p&gt;The library is built with Python and Django and is packaged as a reusable Django app.&lt;/p&gt;

&lt;p&gt;Main technologies used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Django for the application framework, ORM integration, admin support, management commands, and extension points such as signals&lt;/li&gt;
&lt;li&gt;Django REST Framework as an optional extra for ready-to-use API endpoints&lt;/li&gt;
&lt;li&gt;solana and solders for blockchain interaction, transaction parsing, wallet/keypair handling, and signature/status checks&lt;/li&gt;
&lt;li&gt;httpx for RPC communication&lt;/li&gt;
&lt;li&gt;stamina for more resilient retry behavior around network-dependent operations&lt;/li&gt;
&lt;li&gt;cryptography for encrypting one-time wallet private keys before storing them&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How It Works Internally
&lt;/h3&gt;

&lt;p&gt;The core idea is simple: each payment gets its own one-time wallet.&lt;/p&gt;

&lt;p&gt;The flow looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create a dedicated one-time wallet for a payment&lt;/li&gt;
&lt;li&gt;calculate the expected amount for the selected token&lt;/li&gt;
&lt;li&gt;watch blockchain activity for that wallet&lt;/li&gt;
&lt;li&gt;verify that the received amount matches the expected payment&lt;/li&gt;
&lt;li&gt;check the transaction confirmation level&lt;/li&gt;
&lt;li&gt;update the payment status in Django&lt;/li&gt;
&lt;li&gt;optionally move funds from the one-time wallet to the main wallet&lt;/li&gt;
&lt;li&gt;emit Django signals so the host application can react&lt;/li&gt;
&lt;/ul&gt;
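&lt;p&gt;To make the lifecycle concrete, here is a toy sketch of the verification step in plain Python. None of these names come from the library itself; it is only meant to illustrate the flow described above:&lt;/p&gt;

```python
# Toy sketch of the per-payment flow (illustrative names, not library code).
from dataclasses import dataclass

@dataclass
class Payment:
    wallet: str    # one-time wallet created for this payment
    expected: int  # expected amount in base units (e.g. lamports)
    status: str = "pending"

def check_payment(payment, fetch_balance):
    """Confirm the payment once the one-time wallet holds the expected amount."""
    received = fetch_balance(payment.wallet)
    if received >= payment.expected:
        payment.status = "confirmed"
        # the real library would also check the transaction's confirmation
        # level on-chain and emit a Django signal at this point
    return payment.status

# simulate blockchain balances with a dict
ledger = {"one-time-wallet-abc": 1_000_000}
p = Payment(wallet="one-time-wallet-abc", expected=1_000_000)
print(check_payment(p, ledger.get))  # confirmed
```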

&lt;p&gt;After a payment is processed, it emits Django signals so you can run your own business logic (like marking an order as paid) directly inside your application.&lt;/p&gt;
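&lt;p&gt;Reacting to a confirmation signal might look roughly like this. The signal name &lt;code&gt;payment_confirmed&lt;/code&gt; is an assumption for illustration (check the library's signal docs for actual names), and the &lt;code&gt;Signal&lt;/code&gt; class here is a tiny stand-in for &lt;code&gt;django.dispatch.Signal&lt;/code&gt; so the snippet runs without Django:&lt;/p&gt;

```python
# Minimal stand-in for django.dispatch.Signal, for illustration only.
class Signal:
    def __init__(self):
        self._receivers = []
    def connect(self, fn):
        self._receivers.append(fn)
    def send(self, sender, **kwargs):
        for fn in self._receivers:
            fn(sender=sender, **kwargs)

# In the real library you would import its actual signal instead;
# `payment_confirmed` is a hypothetical name.
payment_confirmed = Signal()

orders = {17: {"paid": False}}

def mark_order_paid(sender, payment, **kwargs):
    # your post-payment business logic
    orders[payment["order_id"]]["paid"] = True

payment_confirmed.connect(mark_order_paid)
payment_confirmed.send(sender=None, payment={"order_id": 17})
print(orders[17]["paid"])  # True
```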

&lt;h2&gt;
  
  
  Development Process and Takeaways
&lt;/h2&gt;

&lt;p&gt;This is my first open source library, and shipping it was a very different experience from building internal product features.&lt;/p&gt;

&lt;p&gt;The hardest part was not writing the code itself, but turning a specific implementation into a reusable package that other developers can install, understand, and trust. That meant spending much more time on API boundaries, documentation, examples, contributor experience, test coverage, release flow, and long-term maintainability.&lt;/p&gt;

&lt;p&gt;It took real effort to get this project into a shape that feels production-ready. There were many details to think through: payment lifecycle design, one-time wallet handling, blockchain verification flow, extension points with signals and callbacks, safe defaults, and a developer experience that still feels familiar to Django users.&lt;/p&gt;

&lt;p&gt;Building it pushed me to apply a more disciplined open source mindset around packaging, documentation quality, compatibility, and public interfaces. That process was challenging, but very valuable.&lt;/p&gt;

&lt;p&gt;If you try the library, I would genuinely appreciate your feedback. If something is unclear, missing, or could be improved, I would like to hear it.&lt;/p&gt;

&lt;p&gt;And if you want to support the project, please give it a star on GitHub. It helps more people discover the library. Thanks!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Artemooon/django-solana-payments" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;View on GitHub and Start Building&lt;/a&gt;
&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>django</category>
      <category>python</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>AI Wrote It. Your Database Paid for It: How get_object() in DRF Actions Quietly Kills Backend Performance</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Fri, 27 Feb 2026 20:12:22 +0000</pubDate>
      <link>https://forem.com/artemooon/ai-wrote-it-your-database-paid-for-it-how-getobject-in-drf-actions-quietly-kills-backend-jn9</link>
      <guid>https://forem.com/artemooon/ai-wrote-it-your-database-paid-for-it-how-getobject-in-drf-actions-quietly-kills-backend-jn9</guid>
      <description>&lt;p&gt;A pattern that looks clean, “idiomatic,” and AI-approved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  @action(detail=True, methods=["put"])
  def close_ticket(self, request, pk=None):
      ticket = self.get_object()
      workflow_service.close_ticket(ticket=ticket, actor=request.user)
      return Response(TicketDetailSerializer(ticket).data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works. It ships. Everyone is happy.&lt;/p&gt;

&lt;p&gt;Until you profile it.&lt;/p&gt;

&lt;p&gt;The trap hiding in plain sight is &lt;code&gt;self.get_object()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In this case &lt;code&gt;self.get_object()&lt;/code&gt; does not fetch “a ticket.” It evaluates the entire &lt;code&gt;get_queryset()&lt;/code&gt; for that view.&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;get_queryset()&lt;/code&gt; is built for rich list/retrieve responses, this action may trigger heavy joins/prefetches for data it never uses.&lt;/p&gt;

&lt;p&gt;Let's look at this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class TicketViewSet(ModelViewSet):
    serializer_class = TicketSerializer

    def get_queryset(self):
        qs = (
            Ticket.objects
            .select_related("requester", "assignee", "project", "project__customer")
            .prefetch_related(
                "labels",
                "watchers",
                "attachments",
                Prefetch("comments", queryset=Comment.objects.select_related("author")),
                Prefetch("events", queryset=TicketEvent.objects.select_related("actor")),
            )
            .annotate(
                comments_count=Count("comments", distinct=True),
                watchers_count=Count("watchers", distinct=True),
                last_activity=Max("events__created_at"),
            )
        )

        # lots of list filters, permissions, tenant scoping, etc.
        return qs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One status update endpoint can suddenly act like a mini-report query.&lt;/p&gt;

&lt;p&gt;This is exactly the kind of thing AI-generated backend code often gets wrong: correct behavior, terrible query shape.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Quietly Wrecks DB Performance
&lt;/h2&gt;

&lt;p&gt;AI tools tend to optimize for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;familiar framework patterns&lt;/li&gt;
&lt;li&gt;fewer lines&lt;/li&gt;
&lt;li&gt;“it works” correctness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They do not reliably optimize for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;action-specific query plans&lt;/li&gt;
&lt;li&gt;serializer/query coupling&lt;/li&gt;
&lt;li&gt;P95 latency under load&lt;/li&gt;
&lt;li&gt;DB memory/IO overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you get “smart-looking” code with hidden query bloat.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Design Trap
&lt;/h2&gt;

&lt;p&gt;One global &lt;code&gt;get_queryset()&lt;/code&gt; for all actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;list needs broad graph&lt;/li&gt;
&lt;li&gt;retrieve needs nested graph&lt;/li&gt;
&lt;li&gt;close/reopen need maybe 5 columns total&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then &lt;code&gt;@action&lt;/code&gt; calls &lt;code&gt;self.get_object()&lt;/code&gt;, inheriting the same heavy graph.&lt;/p&gt;

&lt;p&gt;Result: a tiny mutation endpoint executes a heavyweight read path.&lt;/p&gt;
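&lt;p&gt;The coupling can be seen in a small stand-alone simulation. This is plain Python, not DRF; &lt;code&gt;FakeQuerySet&lt;/code&gt; and &lt;code&gt;FakeViewSet&lt;/code&gt; are illustrative stand-ins that mimic how &lt;code&gt;get_object()&lt;/code&gt; fetches through &lt;code&gt;get_queryset()&lt;/code&gt;:&lt;/p&gt;

```python
# Conceptual sketch (not actual DRF source): get_object() builds on
# get_queryset(), so every join/prefetch configured there runs for
# every action that calls it.

class FakeQuerySet:
    def __init__(self, joins):
        self.joins = list(joins)  # relations eagerly loaded by the query
    def get(self, pk):
        # imagine the SQL executed here with all of self.joins applied
        return {"pk": pk, "loaded_relations": self.joins}

class FakeViewSet:
    def get_queryset(self):
        # heavy graph meant for rich list/retrieve responses
        return FakeQuerySet(["requester", "assignee", "comments", "events"])
    def get_object(self, pk):
        # mirrors DRF: the object is fetched *through* get_queryset()
        return self.get_queryset().get(pk)

obj = FakeViewSet().get_object(42)
print(obj["loaded_relations"])  # the status-flip action paid for all of these
```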

&lt;h2&gt;
  
  
  Why This Is Bad
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Over-fetching by default&lt;/strong&gt;&lt;br&gt;
     Loading comments, subscribers, and nested relations for a status flip is wasted work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invisible cost&lt;/strong&gt;&lt;br&gt;
     &lt;code&gt;get_object()&lt;/code&gt; reads like O(1) logic; actual SQL can be huge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrong serializer for the job&lt;/strong&gt;&lt;br&gt;
     Returning full serializer after a transition forces extra DB reads and serialization time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance regression at scale&lt;/strong&gt;&lt;br&gt;
     These endpoints are often frequent. Query bloat multiplies quickly.&lt;/p&gt;
&lt;h2&gt;
  
  
  Better Architecture: Explicit Query Methods
&lt;/h2&gt;

&lt;p&gt;Create dedicated query methods for each use case (repository/query-service).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  class HelpdeskTicketRepository:
      def list_queryset(self, user, filters):
          return (
              HelpdeskTicket.objects
              .filter(...)
              .select_related("reporter", "assignee", "department")
              .prefetch_related("subscribers", "comments__author")
          )

      def transition_queryset(self, user):
          return (
              HelpdeskTicket.objects
              .filter(...)
              .only("id", "status", "closed_by_id", "closed_at", "updated_at")
          )

  class HelpdeskTicketViewSet(ModelViewSet):
      def get_queryset(self):
          if self.action in {"close_ticket", "reopen_ticket"}:
              return repo.transition_queryset(self.request.user)
          return repo.list_queryset(self.request.user, self.request.query_params)

      @action(detail=True, methods=["put"])
      def close_ticket(self, request, pk=None):
          ticket = self.get_object()  # now lean query
          workflow_service.close_ticket(ticket, request.user)
          return Response(TicketStatusSerializer(ticket).data)  # lean response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cons of the “Just Use get_object() in @action” Habit
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Couples action performance to unrelated list/retrieve serializer needs&lt;/li&gt;
&lt;li&gt;Encourages accidental N+1 or unnecessary prefetches&lt;/li&gt;
&lt;li&gt;Makes optimization reactive instead of intentional&lt;/li&gt;
&lt;li&gt;Hides query intent from reviewers&lt;/li&gt;
&lt;li&gt;Gives a false sense of simplicity while increasing infrastructure cost&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI + Backend Rule You Should Adopt
&lt;/h2&gt;

&lt;p&gt;Use AI to speed up scaffolding.&lt;br&gt;
  Never let AI decide your query strategy without review.&lt;/p&gt;

&lt;p&gt;Before optimizing &lt;code&gt;@action&lt;/code&gt;, ask a more uncomfortable question:&lt;/p&gt;

&lt;p&gt;Should this be an &lt;code&gt;@action&lt;/code&gt; at all?&lt;/p&gt;

&lt;p&gt;In many codebases, &lt;code&gt;@action&lt;/code&gt; becomes a dumping ground for “extra endpoints.” It feels convenient. It keeps everything in one place. It looks tidy.&lt;/p&gt;

&lt;p&gt;But structurally, it introduces long-term cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why avoiding &lt;code&gt;@action&lt;/code&gt; often leads to better architecture
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;URL surface becomes implicit and harder to navigate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Routes are derived from method names.&lt;br&gt;
There is no explicit URL declaration.&lt;br&gt;
Finding endpoints requires scanning class methods.&lt;/p&gt;

&lt;p&gt;That does not scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ViewSets grow uncontrollably&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A resource ViewSet starts clean: &lt;code&gt;list, retrieve, create, update, destroy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then comes: &lt;code&gt;close_ticket, reopen_ticket, assign_ticket, approve_ticket, archive_ticket&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now your “resource controller” is a workflow engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It violates Single Responsibility Principle&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A TicketViewSet should manage CRUD semantics of a Ticket resource.&lt;/p&gt;

&lt;p&gt;Workflow transitions are not CRUD.&lt;br&gt;
They are domain commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query shape leakage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As discussed earlier, &lt;code&gt;@action&lt;/code&gt; inherits &lt;code&gt;get_queryset()&lt;/code&gt; unless you explicitly override behavior.&lt;/p&gt;

&lt;p&gt;That means workflow endpoints silently inherit list/retrieve query weight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Cleaner Alternative&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@action(detail=True, methods=["put"])
def close_ticket(self, request, pk=None):
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Prefer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CloseTicketView(APIView):
    def put(self, request, pk):
        ticket = Ticket.objects.only("id", "status").get(pk=pk)
        workflow_service.close_ticket(ticket=ticket, actor=request.user)
        return Response(...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;URL is explicit&lt;/li&gt;
&lt;li&gt;Query is explicit&lt;/li&gt;
&lt;li&gt;Responsibility is isolated&lt;/li&gt;
&lt;li&gt;Class size stays small&lt;/li&gt;
&lt;li&gt;Performance behavior is predictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One class. One action. One purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Takeaway
&lt;/h2&gt;

&lt;p&gt;AI won’t save your database from lazy query design.&lt;br&gt;
  If your action updates one status and fetches half your model graph, you’ve already lost.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;get_object()&lt;/code&gt; is not bad. Blind &lt;code&gt;get_object()&lt;/code&gt; on heavyweight querysets is.&lt;/p&gt;

&lt;p&gt;Design query paths per endpoint. Your DB, latency charts, and cloud bill will all improve.&lt;/p&gt;

&lt;p&gt;If you’ve run into this pattern in production, I’d genuinely like to hear about it.&lt;/p&gt;

&lt;p&gt;Have you discovered hidden query explosions behind “clean” DRF code?&lt;/p&gt;

&lt;p&gt;Share your experience in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>backend</category>
      <category>django</category>
    </item>
    <item>
      <title>Boys on February 14: Babe, these flowers are for you, let's go to that fancy restaurant.

Men on February 14: Okay, I fixed production for the fifth time today, lemme push a few commits to GitHub as well.</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sat, 14 Feb 2026 22:28:06 +0000</pubDate>
      <link>https://forem.com/artemooon/boys-on-february-14-babe-these-flowers-are-for-you-lets-go-to-that-fancy-restaurant-men-on-39j7</link>
      <guid>https://forem.com/artemooon/boys-on-february-14-babe-these-flowers-are-for-you-lets-go-to-that-fancy-restaurant-men-on-39j7</guid>
      <description></description>
      <category>devops</category>
      <category>github</category>
      <category>programming</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>How developers can cope with pressure while remaining calm and professional</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Fri, 14 Nov 2025 18:43:17 +0000</pubDate>
      <link>https://forem.com/artemooon/how-developers-can-cope-with-pressure-while-remaining-calm-and-professional-52n8</link>
      <guid>https://forem.com/artemooon/how-developers-can-cope-with-pressure-while-remaining-calm-and-professional-52n8</guid>
      <description>&lt;p&gt;Let's start with some very real and common situations.&lt;/p&gt;

&lt;p&gt;It's 2 AM and you're trying to fix production after another quick release.&lt;/p&gt;

&lt;p&gt;Or a customer is rushing you to finish a task, messaging every 20 minutes to ask about progress, which keeps you from actually focusing on it.&lt;/p&gt;

&lt;p&gt;Or the project manager once again reprimands you in a raised voice about incorrectly tracked time or exaggerated estimates in tickets.&lt;/p&gt;

&lt;p&gt;All of these are forms of pressure on a person, and a test of their resilience.&lt;/p&gt;

&lt;p&gt;Some people do it consciously, while others do not.&lt;/p&gt;

&lt;p&gt;In any case, this results in stress for the developer, unnecessary anxiety, or even burnout.&lt;/p&gt;

&lt;p&gt;I have been through this dozens of times myself, and in this article I will try to explain how I minimize the damage from this kind of communication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;First and foremost, remain calm and professional.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't respond to pressure with aggression or resentment; that only escalates the conflict and frays your nerves further. Instead, try to see the situation through the other person's eyes and don't take anything personally. Often the simplest solution is to agree with the person and tell them what they want to hear, and to do what they ask if that's acceptable to you. This will save you energy and time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Analyze their behaviour&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have calmed down, try to analyze the situation. Maybe the argument was a one-off and won't happen again. On the other hand, notice whether pressure is being used as a means of controlling employees; some managers and companies can only manage through fear. It also often happens that everything resolves itself on its own.&lt;/p&gt;

&lt;p&gt;But try not to make too many assumptions. Ask questions and communicate clearly whenever possible to avoid misunderstandings.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Different types of thinking&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't forget the significant differences between how developers and managers think. If your manager is a hands-on developer who doesn't criticize your code over every little thing, you're very lucky, but more often than not, management roles are stricter by nature.&lt;/p&gt;

&lt;p&gt;Keep this in mind when you go into a one-on-one meeting with non-technical people: put work out of your mind for a moment and be ready for plenty of personal questions. This can protect you from being caught off guard.&lt;/p&gt;

&lt;p&gt;The manager's job is to supervise you. Your job is to develop the product. You have the right not to waste time on empty chatter, to stay away from meetings, and to benefit the company with your brainpower.&lt;/p&gt;

&lt;p&gt;Set boundaries, turn off notifications, and cancel meetings when possible. This will help you stay focused and avoid arguments. You can always say you were busy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Don't hold anything bad in your mind&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Distract yourself with something you enjoy, listen to music, do something neutral that will calm you down and take your mind off work.&lt;/p&gt;

&lt;p&gt;Most often, you have to accept people as they are and move on.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Take everything as an experience&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Usually, after a dozen different conflict situations, you gain experience and understand how to respond to them. You can also mention some of these situations in interviews and describe how you resolved them peacefully.&lt;/p&gt;

&lt;p&gt;That's it for today. Stay calm and enjoy programming.&lt;/p&gt;

</description>
      <category>career</category>
      <category>watercooler</category>
      <category>writing</category>
    </item>
    <item>
      <title>FastAPI &amp; PostgreSQL Sharding: A Step-by-Step Guide (Part 2) - Step-by-Step Implementation</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sat, 18 Oct 2025 15:17:22 +0000</pubDate>
      <link>https://forem.com/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-2-step-by-step-implementation-49k6</link>
      <guid>https://forem.com/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-2-step-by-step-implementation-49k6</guid>
      <description>&lt;p&gt;This is a continuation of my series of articles about horizontal scaling of databases.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-1-theory-45l1"&gt;first part&lt;/a&gt;, we discussed these topics in theory, including consistent hashing, the pitfalls of traditional hashing, and the challenges that sharding introduces at the application layer. Check it out if you haven’t already before moving forward.&lt;/p&gt;

&lt;p&gt;In this section, we will focus on the practical implementation of PostgreSQL sharding with a FastAPI backend application.&lt;/p&gt;

&lt;p&gt;As a demonstration, we'll build a link shortener app to avoid distractions from the business logic and focus more on the infrastructure and concepts of distributed database systems.&lt;/p&gt;

&lt;p&gt;This is the repository with the complete code - &lt;a href="https://github.com/Artemooon/postgres-shards" rel="noopener noreferrer"&gt;https://github.com/Artemooon/postgres-shards&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's start with the infrastructure setup
&lt;/h2&gt;

&lt;p&gt;First, we need to start a local cluster with Postgres instances. For this, we will use Docker.&lt;/p&gt;

&lt;p&gt;Let's create a Dockerfile for a single Postgres instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM postgres

COPY init_tables.sql /docker-entrypoint-initdb.d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use the latest PostgreSQL image and also copy our SQL script to the &lt;code&gt;/docker-entrypoint-initdb.d&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;When the container runs for the first time, Postgres automatically runs any &lt;code&gt;.sql&lt;/code&gt; files found in &lt;code&gt;/docker-entrypoint-initdb.d&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So when our shards start, this script creates the needed tables for us.&lt;/p&gt;

&lt;p&gt;This is what we have in the &lt;code&gt;init_tables.sql&lt;/code&gt; file; we simply create a table where we plan to store our URLs.&lt;/p&gt;

&lt;p&gt;Later, we will use &lt;code&gt;URL_ID&lt;/code&gt; as our key for the hash ring.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE URL_TABLE 
(
 id serial NOT NULL PRIMARY KEY,
 URL text, 
 URL_ID varchar(5)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you have added these files, run this command in the directory containing your Dockerfile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t my-postgres-shard-image .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, to create and run the Docker container, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run --name pgshard1 -p 5434:5432 -d -e POSTGRES_PASSWORD=postgres my-postgres-shard-image&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: Repeat this command for as many instances as you want in your cluster. Make sure the host port (the left side of &lt;code&gt;-p&lt;/code&gt;) and the container name are unique for each instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating FastAPI app
&lt;/h2&gt;

&lt;p&gt;We will use the &lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;uv package and project manager&lt;/a&gt; to bootstrap our project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install uv&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;pip install uv&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new project
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uv init sharding-app
cd sharding-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add FastAPI and Uvicorn&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;uv add fastapi uvicorn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will be the content of our &lt;code&gt;main.py&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import uvicorn
from fastapi import FastAPI

from routes.url_routes import urls_router

app = FastAPI()


app.include_router(urls_router, prefix="/urls")

if __name__ == "__main__":
    uvicorn.run("main:app", host="localhost", port=5001, reload=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we define a command to start our application with the Uvicorn server, and we also add a router for our URL endpoints.&lt;/p&gt;

&lt;p&gt;Then we need to create connections to our Postgres cluster. Since we are using an asynchronous Python framework, it is better to use an async PostgreSQL driver as well; the asyncpg module helps with this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import asyncpg

db_configs = {
    "5432": dict(user='postgres', password='postgres', database='postgres', host='127.0.0.1', port=5432),
    "5433": dict(user='postgres', password='postgres', database='postgres', host='127.0.0.1', port=5433),
    "5434": dict(user='postgres', password='postgres', database='postgres', host='127.0.0.1', port=5434),
}



def get_db_connector() -&amp;gt; callable:
    connection_cache: dict[str, asyncpg.Connection] = {}

    async def connector(port: str) -&amp;gt; asyncpg.Connection:
        # Create or reuse connection
        conn = connection_cache.get(port)
        if not conn or conn.is_closed():
            conn = await asyncpg.connect(**db_configs[port])
            connection_cache[port] = conn

        return conn

    return connector
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain this code:&lt;/p&gt;

&lt;p&gt;This code defines an asynchronous database connection manager for multiple PostgreSQL instances using asyncpg.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;db_configs&lt;/code&gt; — a dictionary with connection settings for three PostgreSQL servers running on ports 5432, 5433, and 5434.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;get_db_connector()&lt;/code&gt; — returns a function (connector) that can be used to get a connection for a specific port.&lt;/p&gt;

&lt;p&gt;Inside &lt;code&gt;connector(port)&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It checks whether a connection for that port already exists and is still open.&lt;/li&gt;
&lt;li&gt;If not, it creates a new async connection with &lt;code&gt;asyncpg.connect()&lt;/code&gt; and caches it in &lt;code&gt;connection_cache&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Then it returns the active connection.&lt;/li&gt;
&lt;/ul&gt;
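&lt;p&gt;As a sanity check, the caching behaviour of this closure can be exercised in isolation with a stub standing in for &lt;code&gt;asyncpg&lt;/code&gt; (the stub connection class below is invented for illustration, and &lt;code&gt;async&lt;/code&gt; is dropped purely so the sketch runs standalone):&lt;/p&gt;

```python
# A minimal sketch of the same closure-based caching pattern,
# with a stub in place of asyncpg so it runs without a database.

class StubConnection:
    """Stands in for asyncpg.Connection; tracks open/closed state."""
    def __init__(self, port: str):
        self.port = port
        self._closed = False

    def is_closed(self) -> bool:
        return self._closed


def get_db_connector():
    connection_cache: dict[str, StubConnection] = {}

    def connector(port: str) -> StubConnection:
        # Reuse an open cached connection, otherwise create a new one
        conn = connection_cache.get(port)
        if not conn or conn.is_closed():
            conn = StubConnection(port)
            connection_cache[port] = conn
        return conn

    return connector


connector = get_db_connector()
first = connector("5432")
second = connector("5432")   # same object: served from the cache
other = connector("5433")    # different port, so a different connection

print(first is second)  # True
print(first is other)   # False
```

&lt;p&gt;Because the cache lives in the closure, every caller that holds the same &lt;code&gt;connector&lt;/code&gt; shares the same pool of connections.&lt;/p&gt;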

&lt;h2&gt;
  
  
  Hash ring
&lt;/h2&gt;

&lt;p&gt;Next, we need a hash ring to decide which database instance should handle a given request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from uhashring import HashRing

from db_connection import db_configs

db_hr = HashRing(nodes=list(db_configs.keys()))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use HashRing from the &lt;a href="https://pypi.org/project/uhashring/" rel="noopener noreferrer"&gt;uhashring&lt;/a&gt; library to distribute &lt;strong&gt;keys&lt;/strong&gt; (and thus requests) consistently across multiple PostgreSQL instances (running on ports 5432, 5433, and 5434). This ensures that each key is always mapped to the same database node, and when nodes are added or removed, only a minimal portion of keys needs to be remapped.&lt;/p&gt;

&lt;p&gt;Check out my &lt;a href="https://dev.to/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-1-theory-45l1"&gt;previous article&lt;/a&gt; on consistent hashing, where I explained how hash rings work and why they’re ideal for distributed systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding endpoints and business logic
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib
import base64
from fastapi import APIRouter, Depends, HTTPException

from db_connection import get_db_connector
from shard_hashing import db_hr

urls_router = APIRouter()


@urls_router.get("/{url_id}")
async def get_url(url_id: str, get_connection=Depends(get_db_connector)):

    db_server_port = db_hr.get(url_id)["hostname"]
    conn = await get_connection(db_server_port)

    try:
        url_data = await conn.fetchrow("SELECT URL, URL_ID FROM url_table WHERE URL_ID=$1", url_id)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"DB query failed: {str(e)}")
    finally:
        await conn.close()

    if not url_data:
        raise HTTPException(status_code=404, detail=f"Url with id: {url_id} was not found")

    return {
        "url_id": url_data.get("url_id"),
        "url": url_data.get("url"),
        "server": db_hr.get(url_id)
    }

@urls_router.post("")
async def create_url(full_url: str, get_connection=Depends(get_db_connector)):
    hash_bytes = hashlib.sha256(full_url.encode('utf-8')).digest()

    base64_hash = base64.b64encode(hash_bytes).decode('utf-8')
    url_id = base64_hash[0:5]
    db_server_port = db_hr.get(url_id)["hostname"]

    conn = await get_connection(db_server_port)

    try:
        await conn.execute("INSERT INTO url_table (URL, URL_ID) VALUES($1, $2)", full_url, url_id)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"DB insert failed: {str(e)}")

    return {
        "url_id": url_id,
        "url": full_url,
        "server": db_hr.get(url_id)
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this final part, we define two FastAPI endpoints that use the hash ring and database connector to distribute and retrieve data across multiple PostgreSQL instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GET /{url_id}&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This endpoint retrieves a stored URL by its unique url_id.&lt;/p&gt;

&lt;p&gt;It uses the hash ring (db_hr) to determine which database node (port) should hold the record for that specific url_id.&lt;/p&gt;

&lt;p&gt;It then fetches the connection from the async connection manager (get_db_connector).&lt;/p&gt;

&lt;p&gt;Then an SQL query retrieves the URL record.&lt;/p&gt;

&lt;p&gt;If no record is found, it returns a 404 error; otherwise, it responds with the URL data and the database node info.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;POST /&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This endpoint creates a new short URL record.&lt;/p&gt;

&lt;p&gt;It takes the original full_url, hashes it with SHA-256, encodes the digest with Base64, and uses the first five characters as a short url_id.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;url_id&lt;/code&gt; is then passed through the hash ring to select which PostgreSQL instance will store it.&lt;/p&gt;

&lt;p&gt;The URL and its ID are inserted into that database.&lt;/p&gt;

&lt;p&gt;Finally, it returns the created record along with the server node info.&lt;/p&gt;
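&lt;p&gt;The url_id derivation used by the POST handler can be reproduced on its own; this sketch repeats the same hashlib and base64 calls as the endpoint above:&lt;/p&gt;

```python
import base64
import hashlib

def make_url_id(full_url: str) -> str:
    # SHA-256 digest of the URL, Base64-encoded, first 5 characters
    hash_bytes = hashlib.sha256(full_url.encode("utf-8")).digest()
    base64_hash = base64.b64encode(hash_bytes).decode("utf-8")
    return base64_hash[:5]

url_id = make_url_id("https://www.postgresql.org/")
print(url_id, len(url_id))  # always 5 characters

# The id is deterministic: hashing the same URL twice gives the same id
assert make_url_id("https://www.postgresql.org/") == url_id
```

&lt;p&gt;Note that standard Base64 output can contain &lt;code&gt;+&lt;/code&gt; and &lt;code&gt;/&lt;/code&gt; characters, which are not URL-safe; &lt;code&gt;base64.urlsafe_b64encode&lt;/code&gt; avoids that if it matters for your links.&lt;/p&gt;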

&lt;p&gt;Combined, this gives an application that works efficiently with a database cluster and uses consistent hashing to route each operation to the correct database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional complications
&lt;/h2&gt;

&lt;p&gt;As you saw, sharding adds additional complexity on the application layer by default. You now need to handle this complexity in your code, such as using consistent hashing, managing multiple database connectors, and ensuring proper data distribution across nodes.&lt;/p&gt;

&lt;p&gt;Sharding also adds additional costs associated with maintaining multiple databases instead of one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing our API
&lt;/h2&gt;

&lt;p&gt;Let's start our app and perform some requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location --request POST 'http://localhost:5001/urls?full_url=https://www.postgresql.org/'

curl --location --request POST 'http://localhost:5001/urls?full_url=https://www.google.com/'

curl --location --request POST 'http://localhost:5001/urls?full_url=https://www.dev.to/'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create new URLs in our database cluster. When creating new records, you may see a different server in the server information object because we pass the unique &lt;code&gt;url_id&lt;/code&gt; to the hash ring, which returns the server responsible for that hash.&lt;/p&gt;

&lt;p&gt;When fetching a full URL by &lt;code&gt;url_id&lt;/code&gt;, it will always be retrieved from the same server, since the hash ring consistently maps that &lt;code&gt;url_id&lt;/code&gt; to the same node.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl --location --request GET 'http://localhost:5001/urls/0OGWo'&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Short demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplbxax95phkkr40mzpv2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplbxax95phkkr40mzpv2.gif" alt="A GIF demonstrating how our URL shortener API works." width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we built a minimal working FastAPI application that uses database sharding by combining a hash ring for consistent key distribution and a connection manager for async PostgreSQL access. With this application, you now have a basic template and understanding of how horizontal database scaling works.&lt;/p&gt;

</description>
      <category>fastapi</category>
      <category>postgres</category>
      <category>distributedsystems</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>FastAPI &amp; PostgreSQL Sharding: A Step-by-Step Guide (Part 1) - Theory</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Mon, 06 Oct 2025 09:39:35 +0000</pubDate>
      <link>https://forem.com/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-1-theory-45l1</link>
      <guid>https://forem.com/artemooon/fastapi-postgresql-sharding-a-step-by-step-guide-part-1-theory-45l1</guid>
      <description>&lt;p&gt;In this article series, I'll share my experience with the database sharding using PostgreSQL and FastAPI.&lt;/p&gt;

&lt;p&gt;This series consists of two articles. In the first one, I'll cover the theoretical concepts behind sharding, including consistent hashing and other common pitfalls in distributed systems. In the second article, I'll showcase a practical FastAPI application that works with PostgreSQL shards.&lt;/p&gt;

&lt;p&gt;The purpose of this article is to provide a basic understanding of how distributed databases work and the complexities they introduce into the system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is sharding?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sharding, or horizontal database partitioning, is the process of distributing your system's data across multiple physical servers (called shards) that together form a cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do we really need sharding?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we start, I should say that I have not worked with sharding in production systems (and you may never have to either), and this is totally fine: in my opinion, this optimization technique is an extreme measure. I recommend avoiding sharding in production applications when possible, because there are other effective optimization mechanisms such as indexing, partitioning, replication, denormalization, and query optimization.&lt;/p&gt;

&lt;p&gt;Sharding is really required when you are dealing with billions of rows in a single table and can no longer optimize within one physical server; at that point, you start to spawn multiple database servers and group them into a cluster.&lt;/p&gt;

&lt;p&gt;With horizontal partitioning, you can distribute the data across multiple smaller, cheaper servers (which is not really possible with a single big instance); this makes your system more resilient as well.&lt;/p&gt;

&lt;p&gt;Sharding can also help with geographical distribution: for example, user data from Europe can live in a European shard, and user data from the US can live in a US shard, reducing latency and improving compliance with regional data regulations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hashing and consistent hashing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need to understand how consistent hashing works before moving forward. Regular &lt;a href="https://realpython.com/python-hash-table/" rel="noopener noreferrer"&gt;hashing&lt;/a&gt; uses a hash function that accepts an arbitrary value and returns an integer.&lt;/p&gt;

&lt;p&gt;Example: &lt;code&gt;hash_function("python") = 2393&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, imagine you have 4 database shards and you add a new user. We can use this integer to locate the needed DB instance: since we have a fixed number of shards, we take the hash modulo the number of shards, which in our case is 4.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;shard = hash_function(user_id) % 4&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This works fine until a shard is added or removed. Let's add a new shard; now we have 5 shards in total.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;shard = hash_function(user_id) % 5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now most of the users across the original 4 shards have effectively lost their placement: the same user ID now points to a different shard, so if you try to fetch a user by an ID that was stored on some instance, you will no longer be able to locate it.&lt;/p&gt;
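&lt;p&gt;The scale of the problem is easy to measure: hash a batch of IDs modulo 4 and modulo 5 and count how many land on a different shard (the IDs and shard counts below are arbitrary examples):&lt;/p&gt;

```python
import hashlib

def hash_function(key: str) -> int:
    # A stable hash (Python's built-in hash() is randomized per process)
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

user_ids = [f"user-{i}" for i in range(1000)]

# Count keys whose shard changes when we go from 4 shards to 5
moved = sum(
    1 for uid in user_ids
    if hash_function(uid) % 4 != hash_function(uid) % 5
)
print(f"{moved} of {len(user_ids)} keys map to a different shard")
```

&lt;p&gt;With random-looking hashes, roughly 80% of keys end up on a different shard (a key stays put only when its hash modulo 20 is 0, 1, 2, or 3), which is why naive modulo-based sharding makes resizing so painful.&lt;/p&gt;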

&lt;p&gt;Consistent hashing was invented to solve this problem. In this method we use a circle (usually called a ring), where we place our database servers at different positions around the ring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmxz3z3z2utnhhnzfq7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmxz3z3z2utnhhnzfq7e.png" alt=" " width="611" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The picture shows: a consistent hashing ring where each server owns a portion of the hash space across the ring.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let's look at this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0 - - - - - - - - - - - - - - - - - - - 360)
 ^ server A      ^ server B      ^ server C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;User ID 1 hashes between A and B, so we store it on B.&lt;/p&gt;

&lt;p&gt;User ID 2 hashes between B and C, so we store it on C.&lt;/p&gt;

&lt;p&gt;Now we add server D:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0 - - - - - - - - - - - - - - - - - - - 360)
 ^ server A    ^ server D  ^ server B  ^ server C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Only the keys between A and D move to D. Everything else remains on the same servers.&lt;/p&gt;

&lt;p&gt;User ID 1 originally hashed between A and B and was stored on B; if its hash actually lies between A and D, it now maps to D.&lt;/p&gt;

&lt;p&gt;User ID 2 still hashes between B and C, so it stays on C.&lt;/p&gt;

&lt;p&gt;So minimal resharding still happens: only the keys belonging to the neighboring segment are reallocated, instead of having to rehash and redistribute all data across all servers as in traditional hashing.&lt;/p&gt;
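&lt;p&gt;This behaviour can be sketched with a toy ring built from the standard library (no virtual nodes, unlike production libraries such as uhashring, so the segments are uneven, but the remapping property is the same):&lt;/p&gt;

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    # Deterministic position on the ring for nodes and keys alike
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def build_ring(nodes: list[str]) -> list[tuple[int, str]]:
    # Each node is placed on the ring at its hash position
    return sorted((ring_hash(n), n) for n in nodes)

def locate(ring: list[tuple[int, str]], key: str) -> str:
    # Walk clockwise to the first node at or after the key's position
    positions = [pos for pos, _ in ring]
    idx = bisect.bisect(positions, ring_hash(key)) % len(ring)
    return ring[idx][1]

keys = [f"user-{i}" for i in range(1000)]

before = build_ring(["server-A", "server-B", "server-C"])
after = build_ring(["server-A", "server-B", "server-C", "server-D"])

moved = sum(1 for k in keys if locate(before, k) != locate(after, k))
print(f"{moved} of {len(keys)} keys moved after adding server-D")
```

&lt;p&gt;Every key whose owner changed now maps to server-D: only the arc captured by the new node is reallocated, and the rest of the ring is untouched.&lt;/p&gt;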

&lt;p&gt;&lt;strong&gt;Loss of traditional ACID transactions across shards&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you split data across multiple shards, the familiar ACID transaction guarantees are no longer available. Operations that need to update data across multiple shards can't rely on the database's built-in transaction guarantees.&lt;/p&gt;

&lt;p&gt;You either need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement distributed transactions (with two-phase commit or custom coordination), which adds extra complexity and slows down the transaction commit.&lt;/li&gt;
&lt;li&gt;Move toward eventual consistency.&lt;/li&gt;
&lt;/ul&gt;
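&lt;p&gt;To make the first option concrete, here is a bare-bones two-phase commit coordinator with in-memory stand-ins for shards; a real implementation also needs durable logging and failure recovery, which this sketch deliberately omits:&lt;/p&gt;

```python
class Shard:
    """In-memory stand-in for a database shard with prepare/commit/rollback."""
    def __init__(self):
        self.data = {}
        self._staged = None

    def prepare(self, key, value) -> bool:
        # Phase 1: stage the write and vote yes/no
        self._staged = (key, value)
        return True

    def commit(self):
        # Phase 2: make the staged write visible
        key, value = self._staged
        self.data[key] = value
        self._staged = None

    def rollback(self):
        self._staged = None


def two_phase_commit(shard_writes) -> bool:
    """shard_writes: list of (shard, key, value). Commit all or none."""
    prepared = []
    for shard, key, value in shard_writes:
        if shard.prepare(key, value):
            prepared.append(shard)
        else:
            # One participant voted no: roll back everyone already prepared
            for s in prepared:
                s.rollback()
            return False
    # All participants voted yes: commit everywhere
    for shard, _, _ in shard_writes:
        shard.commit()
    return True


a, b = Shard(), Shard()
ok = two_phase_commit([(a, "user:1", "Alice"), (b, "user:2", "Bob")])
print(ok, a.data, b.data)  # True {'user:1': 'Alice'} {'user:2': 'Bob'}


class FailingShard(Shard):
    def prepare(self, key, value) -> bool:
        return False  # always votes no


c = Shard()
bad = two_phase_commit([(c, "x", 1), (FailingShard(), "y", 2)])
print(bad, c.data)  # False {} -- the prepared shard rolled back
```

&lt;p&gt;Note the cost: every write now takes two round trips to every participating shard, which is exactly the latency and complexity overhead mentioned above.&lt;/p&gt;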

&lt;p&gt;&lt;strong&gt;Increased application complexity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With sharding, your application's query logic becomes harder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You might need to handle cross-shard joins when a single query needs data from different shards.&lt;/li&gt;
&lt;li&gt;You have to manage shard-routing logic and fallback logic when rebalancing happens.&lt;/li&gt;
&lt;li&gt;Deployment and backup collection become more complex and expensive.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In many cases, scaling vertically (and applying optimization techniques to a single instance) might be cheaper and simpler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now you know what consistent hashing and horizontal database partitioning (also known as sharding) are. We've also discussed the complexity of horizontally scaled systems and some of the pitfalls they introduce.&lt;/p&gt;

&lt;p&gt;In part two, we'll explore a practical implementation using FastAPI and PostgreSQL.&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>programming</category>
      <category>database</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Design Pattern Every Python Developer Should Know</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Mon, 29 Sep 2025 16:25:32 +0000</pubDate>
      <link>https://forem.com/artemooon/a-design-pattern-every-python-developer-should-know-mb8</link>
      <guid>https://forem.com/artemooon/a-design-pattern-every-python-developer-should-know-mb8</guid>
      <description>&lt;p&gt;Today we will look at the very useful design pattern in the modern Python development.&lt;/p&gt;

&lt;p&gt;Meet the &lt;strong&gt;Template Method&lt;/strong&gt; pattern.&lt;/p&gt;

&lt;p&gt;The Template Method is a design pattern that gives you a master plan for an algorithm. Think of it like a recipe with some specific steps left blank. The main recipe (the "template") is fixed, so you can't change the overall order of the steps. However, you can fill in the blank parts with your own custom code.&lt;/p&gt;
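&lt;p&gt;In code, the "recipe with blanks" can be sketched like this (the class and step names are invented for illustration):&lt;/p&gt;

```python
from abc import ABC, abstractmethod

class Recipe(ABC):
    def cook(self) -> list[str]:
        # The template method: the order of the steps is fixed here
        return [self.prepare(), self.main_step(), "serve"]

    def prepare(self) -> str:
        # A default step that subclasses may keep or override
        return "wash hands"

    @abstractmethod
    def main_step(self) -> str:
        # A blank that every subclass must fill in
        ...

class PastaRecipe(Recipe):
    def main_step(self) -> str:
        return "boil pasta"

print(PastaRecipe().cook())  # ['wash hands', 'boil pasta', 'serve']
```

&lt;p&gt;Subclasses fill in &lt;code&gt;main_step()&lt;/code&gt;, but the overall order of steps in &lt;code&gt;cook()&lt;/code&gt; stays fixed, and the abstract base class cannot be instantiated on its own.&lt;/p&gt;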

&lt;h2&gt;
  
  
  The problem template method solves
&lt;/h2&gt;

&lt;p&gt;Let's take a real-life problem. Imagine you are building a Crawler IPs Management Service responsible for securely identifying incoming requests from known search-engine bots (Applebot, Bingbot, Googlebot, etc.) to improve the SEO performance of your app. For each specific bot, the system must perform a sequence of standard steps to fetch and cache its valid list of IP addresses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check cache: look up the IP list using a bot-specific key in the cache storage.&lt;/li&gt;
&lt;li&gt;Fetch data: if the cache is empty, request the data from the bot's API URL.&lt;/li&gt;
&lt;li&gt;Process data: extract the actual list of IPv4 and IPv6 addresses from the API response format.&lt;/li&gt;
&lt;li&gt;Save cache: store the resulting IP list for a set duration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The initial implementation involves creating a separate class for every bot (e.g., AppleCrawlersService, DuckDuckGoCrawlersService).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class DuckDuckGoCrawlersService:
    """
    A standalone service for managing DuckDuckGo bot IPs.
    This class also contains all the logic, further demonstrating duplication.
    """
    _cache_key = "duckduckgo_bot"
    _ips_list_url = settings.DUCKDUCK_BOT_IPS_URL

    def _extract_ips_from_api_response(self, data):
        """Extracts the list of IPs from the DuckDuckGo API response."""
        content = data.content
        if not content:
            logger.warning(f"No data available to extract IPs for key {self._cache_key}")
            return []

        processed_ips = []
        for line in content.splitlines():
            if line.startswith("- "):
                # Remove '-' and all spaces
                cleaned_line = re.sub(r"[-\s]", "", line)
                processed_ips.append(cleaned_line)
        return processed_ips

    # --- THE DUPLICATED LOGIC STARTS HERE ---

    def _fetch_ips(self, url: str):
        http_client = MockHTTPClientBase(api_base_url=url, api_secret="")
        response = http_client.make_request()
        logger.info(f"Fetched IPs with key: {self._cache_key}")
        return response.content

    def _get_ips_from_cache(self, key: str):
        logger.info(f"Getting crawlers ips from cache: {key}")
        return cache.get(key)

    def _save_ips_to_cache(self, key: str, ip_list: list):
        cache.set(key, ip_list, settings.DEFAULT_CRAWLERS_CACHE_TIMEOUT)
        logger.info(f"Saved crawlers IPs to cache with key: {key}")

    def get_ips_list(self) -&amp;gt; list[str]:
        """
        This entire method is duplicated from the other service classes.
        """
        ips = self._get_ips_from_cache(self._cache_key)
        if ips:
            return ips
        urls = self._ips_list_url if isinstance(self._ips_list_url, list) else [self._ips_list_url]

        with concurrent.futures.ThreadPoolExecutor() as executor:
            responses = list(executor.map(self._fetch_ips, urls))

        ips = list(chain.from_iterable(self._extract_ips_from_api_response(resp) for resp in responses))
        self._save_ips_to_cache(self._cache_key, ips)
        return ips

    # --- DUPLICATED LOGIC ENDS HERE ---


class BingCrawlersService:
    """
    A standalone service for managing Bingbot IPs.
    This class is nearly identical to the Apple service.
    """
    _cache_key = "bingbot"
    _ips_list_url = settings.BINGBOT_IPS_URL

    def _extract_ips_from_api_response(self, data):
        """Extracts the list of IPs from the Bingbot API response."""
        ip_addresses = []
        prefixes = data.get("prefixes", [])
        for prefix in prefixes:
            if settings.CRAWLERS_IP_V4_PREFIX in prefix:
                ip_addresses.append(prefix[settings.CRAWLERS_IP_V4_PREFIX])
            elif settings.CRAWLERS_IP_V6_PREFIX in prefix:
                ip_addresses.append(prefix[settings.CRAWLERS_IP_V6_PREFIX])
        return ip_addresses

    # --- THE DUPLICATED LOGIC STARTS HERE ---

    def _fetch_ips(self, url: str):
        http_client = MockHTTPClientBase(api_base_url=url, api_secret="")
        response = http_client.make_request()
        logger.info(f"Fetched IPs with key: {self._cache_key}")
        return response.content

    def _get_ips_from_cache(self, key: str):
        logger.info(f"Getting crawlers ips from cache: {key}")
        return cache.get(key)

    def _save_ips_to_cache(self, key: str, ip_list: list):
        cache.set(key, ip_list, settings.DEFAULT_CRAWLERS_CACHE_TIMEOUT)
        logger.info(f"Saved crawlers IPs to cache with key: {key}")

    def get_ips_list(self) -&amp;gt; list[str]:
        """
        This entire method is duplicated from the Apple service class.
        """
        ips = self._get_ips_from_cache(self._cache_key)
        if ips:
            return ips
        urls = self._ips_list_url if isinstance(self._ips_list_url, list) else [self._ips_list_url]

        with concurrent.futures.ThreadPoolExecutor() as executor:
            responses = list(executor.map(self._fetch_ips, urls))

        ips = list(chain.from_iterable(self._extract_ips_from_api_response(resp) for resp in responses))
        self._save_ips_to_cache(self._cache_key, ips)
        return ips

    # --- DUPLICATED LOGIC ENDS HERE ---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works totally fine! But now imagine you need to add a new crawler parser to the system or change something in the existing crawler-fetching logic. Then you have to copy-paste code, create new classes, or modify existing working code; this approach is error-prone and takes additional effort.&lt;/p&gt;

&lt;p&gt;Now look at the code again: do you see any common logic in these classes?&lt;/p&gt;

&lt;p&gt;These methods are the same across all crawler services: &lt;code&gt;_fetch_ips()&lt;/code&gt;, &lt;code&gt;_get_ips_from_cache()&lt;/code&gt;, &lt;code&gt;_save_ips_to_cache()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And these might have a different implementation per class: &lt;code&gt;_extract_ips_from_api_response()&lt;/code&gt;, &lt;code&gt;ips_list_url&lt;/code&gt;, &lt;code&gt;cache_key&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can create a new abstract class, implement all the common methods there, and mark the unique details as abstract methods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class CrawlersServiceABC(ABC):
    @property
    def cache_key(self) -&amp;gt; str:
        """
        The name of the key with which to associate the cached list of crawlers IPs

        Returns:
             str: A name of the key for the cache
        """
        raise NotImplementedError

    @property
    def ips_list_url(self) -&amp;gt; list | str:
        """
        The URL of the resource that the _fetch_ips() method uses to get IP addresses

        Returns:
             str: URL of the resource
        """
        raise NotImplementedError

    @abstractmethod
    def _extract_ips_from_api_response(self, data: dict | requests.Response) -&amp;gt; list[str]:
        """
        Extracts the list of IPs from the response provided by resource

        Returns:
             list[str]: A list of IP addresses associated with the crawler
        """
        raise NotImplementedError

    def _fetch_ips(self, url: str) -&amp;gt; dict | requests.Response:
        http_client = HTTPClientBase(api_base_url=url, api_secret="")

        response = http_client.make_request()
        logger.info(f"Fetched IPs with key: {self.cache_key}")

        return response.content

    def _get_ips_from_cache(self, key: str):
        logger.info(f"Getting crawlers ips from cache: {key}")
        return cache.get(key)

    def _save_ips_to_cache(self, key: str, ip_list: list):
        cache.set(key, ip_list, settings.DEFAULT_CRAWLERS_CACHE_TIMEOUT)
        logger.info(f"Saved crawlers IPs to cache with key: {key}")

    def get_ips_list(self) -&amp;gt; list[str]:
        """
        Single public method that is an entry point for the client's code.
        Either it makes a request to a resource that contains a list of crawlers IPs
        or takes a list of IPs from the cache

        Returns:
             list[str]: A list of IP addresses associated with the crawler
        """
        ips = self._get_ips_from_cache(self.cache_key)
        if ips:
            return ips
        urls = self.ips_list_url if isinstance(self.ips_list_url, list) else [self.ips_list_url]

        # Use ThreadPoolExecutor to fetch IPs concurrently
        with concurrent.futures.ThreadPoolExecutor() as executor:
            responses = list(executor.map(self._fetch_ips, urls))

        # Extract IPs from all responses
        ips = list(chain.from_iterable(self._extract_ips_from_api_response(resp) for resp in responses))
        self._save_ips_to_cache(self.cache_key, ips)
        return ips

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't pay too much attention to the details; this code might be a bit complicated, but the key part here is the &lt;code&gt;get_ips_list()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;This is our &lt;strong&gt;template method&lt;/strong&gt;. It defines the fixed algorithm skeleton. You should generally not redefine this in subclasses.&lt;/p&gt;

&lt;p&gt;Updated subclasses look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class AppleCrawlersService(CrawlersServiceABC):
    _cache_key = "applebot"
    _ips_list_url = settings.APPLEBOT_IPS_URL

    def _extract_ips_from_api_response(self, data):
        ip_addresses = []
        prefixes = data.get("prefixes", [])

        for prefix in prefixes:
            if settings.CRAWLERS_IP_V4_PREFIX in prefix:
                ip_addresses.append(prefix[settings.CRAWLERS_IP_V4_PREFIX])
            elif settings.CRAWLERS_IP_V6_PREFIX in prefix:
                ip_addresses.append(prefix[settings.CRAWLERS_IP_V6_PREFIX])

        return ip_addresses

    @property
    def cache_key(self) -&amp;gt; str:
        return self._cache_key

    @property
    def ips_list_url(self) -&amp;gt; str:
        return self._ips_list_url

class DuckDuckGoCrawlersService(CrawlersServiceABC):
    _cache_key = "duckduckgo_bot"
    _ips_list_url = settings.DUCKDUCK_BOT_IPS_URL

    def _fetch_ips(self, url: str):
        try:
            response = requests.get(self._ips_list_url)
            response.raise_for_status()
            logger.info(f"Fetched IPs with key: {self.cache_key}")
            return response
        except requests.RequestException as e:
            logger.warning(f"Error fetching IPs with key {self.cache_key}: {e}")

    def _extract_ips_from_api_response(self, data):
        if not data or not data.text:
            logger.warning(f"No data available to extract IPs for key {self.cache_key}")
            return []

        content = data.text
        processed_ips = []

        for line in content.splitlines():
            if line.startswith("- "):
                # Remove '-' and all spaces
                cleaned_line = re.sub(r"[-\s]", "", line)
                processed_ips.append(cleaned_line)

        return processed_ips

    @property
    def cache_key(self) -&amp;gt; str:
        return self._cache_key

    @property
    def ips_list_url(self) -&amp;gt; str:
        return self._ips_list_url


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now each of them &lt;strong&gt;must&lt;/strong&gt; implement the &lt;code&gt;_extract_ips_from_api_response()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;The common methods &lt;code&gt;_get_ips_from_cache()&lt;/code&gt; and &lt;code&gt;_save_ips_to_cache()&lt;/code&gt; are inherited from &lt;code&gt;CrawlersServiceABC&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;All shared logic lives directly in the &lt;code&gt;CrawlersServiceABC&lt;/code&gt; abstract class, while child classes supply the concrete implementations of the abstract methods. The overall algorithm is assembled by the template method, which the client code can then call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apple_crawler_service = AppleCrawlersService()
duck_duck_go_crawler_service = DuckDuckGoCrawlersService()

apple_crawler_service.get_ips_list()
duck_duck_go_crawler_service.get_ips_list()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might notice that we also redefined the &lt;code&gt;_fetch_ips()&lt;/code&gt; method for &lt;code&gt;DuckDuckGoCrawlersService&lt;/code&gt;, even though it is not an abstract method.&lt;/p&gt;

&lt;p&gt;These methods are called &lt;strong&gt;hooks&lt;/strong&gt;. Hooks are methods that have a default, common implementation in the base class, but which can be &lt;strong&gt;optionally&lt;/strong&gt; overridden by a subclass to provide specific behavior.&lt;/p&gt;

&lt;p&gt;We can do the same with the &lt;code&gt;cache_key&lt;/code&gt;, &lt;code&gt;ips_list_url&lt;/code&gt; properties if we want to.&lt;/p&gt;
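
&lt;p&gt;To make the terminology concrete, here is a minimal, self-contained sketch (class and method names are invented for illustration): &lt;code&gt;run()&lt;/code&gt; is the template method, &lt;code&gt;render()&lt;/code&gt; is an abstract primitive operation that subclasses must implement, and &lt;code&gt;prepare()&lt;/code&gt; is a hook with a default implementation that subclasses may optionally override.&lt;/p&gt;

```python
from abc import ABC, abstractmethod

class ReportABC(ABC):
    def run(self):
        # The template method: a fixed skeleton that calls the steps in order.
        data = self.prepare()
        return self.render(data)

    def prepare(self):
        # Hook: has a sensible default, subclasses MAY override it.
        return [1, 2, 3]

    @abstractmethod
    def render(self, data):
        # Primitive operation: subclasses MUST implement it.
        ...

class CsvReport(ReportABC):
    def render(self, data):
        return ",".join(str(x) for x in data)

class SquaredCsvReport(CsvReport):
    def prepare(self):
        # Overriding the hook changes one step; the skeleton stays intact.
        return [x * x for x in super().prepare()]

print(CsvReport().run())         # -> 1,2,3
print(SquaredCsvReport().run())  # -> 1,4,9
```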

&lt;h2&gt;
  
  
  So why is it important for Python devs?
&lt;/h2&gt;

&lt;p&gt;Now that we've learned a new design pattern, we should also be able to recognize it when someone else has used it. Spotting the Template Method pattern makes it much easier to understand other people's code.&lt;/p&gt;

&lt;p&gt;The most ubiquitous example of the Template Method pattern in Python is found in the &lt;a href="https://realpython.com/python-magic-methods/" rel="noopener noreferrer"&gt;Magic Methods&lt;/a&gt; (or "Dunder" methods, like &lt;code&gt;__init__&lt;/code&gt;, &lt;code&gt;__new__&lt;/code&gt;, &lt;code&gt;__str__&lt;/code&gt;, etc.).&lt;/p&gt;

&lt;p&gt;When you implement a class, you often override these built-in methods to customize how your object behaves. The secret here is that Python executes your code the same way a template method executes the code of your hooks in subclasses.&lt;/p&gt;

&lt;p&gt;The process of creating an object in Python is one of the clearest examples of the Template Method pattern being executed by the interpreter itself.&lt;/p&gt;

&lt;p&gt;The Template Method (the overall instantiation process) dictates the fixed order: &lt;code&gt;__new__&lt;/code&gt; then &lt;code&gt;__init__&lt;/code&gt;. The Primitive Operation that developers override to inject their unique setup logic is &lt;code&gt;__init__&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class BaseObject:
    # __new__ is part of the FIXED TEMPLATE. It creates the object.
    def __new__(cls, *args, **kwargs):
        print("TEMPLATED STEP 1: Allocating memory and creating raw object instance.")
        instance = super().__new__(cls)
        return instance

class UserProfile(BaseObject):
    # __init__ is the hook we override.
    # We put our own initialization logic here.
    def __init__(self, username, is_admin=False):
        print(f"TEMPLATED STEP 2: Running initialization logic for UserProfile('{username}')")
        self.username = username
        self.is_admin = is_admin
        print("TEMPLATED STEP 2 COMPLETE: Object configuration finished.")

# When we call the constructor, the Template Method (instantiation) runs:
print("--- Starting UserProfile(username='Alice') ---")
user = UserProfile(username="Alice")
# The final step of the template returns the initialized object.

print(f"Resulting object: {user.username}, Admin: {user.is_admin}")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another real world example is &lt;a href="https://www.django-rest-framework.org/" rel="noopener noreferrer"&gt;DRF's&lt;/a&gt; views system design.&lt;/p&gt;

&lt;p&gt;Remember how you manage your views using DRF's generic views?&lt;/p&gt;

&lt;p&gt;Here is the code of the &lt;code&gt;ListModelMixin&lt;/code&gt; class, where the &lt;code&gt;list()&lt;/code&gt; method is implemented as a template method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ListModelMixin:
    """
    List a queryset.
    """
    def list(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())

        page = self.paginate_queryset(queryset)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            return self.get_paginated_response(serializer.data)

        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data) 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you create your custom view like this one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class PostListView(GenericAPIView, ListModelMixin):
    # Primitive Operation 1: Tell the Template which data to use.
    queryset = Post.objects.filter(published=True)

    # Primitive Operation 2: Tell the Template how to structure the data.
    serializer_class = PostSerializer

    # Entry point: our get() handler delegates to the template method,
    # the base ListModelMixin.list().
    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)

# The list() method (inside ListModelMixin) is the Template:
# 1. Calls get_queryset() (which uses our Primitive Operation: queryset)
# 2. Checks for pagination
# 3. Calls get_serializer() (which uses our Primitive Operation: serializer_class)
# 4. Returns Response

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You are using the Template Method pattern without even noticing it: DRF creates the skeleton of the algorithm for you, and you just override the hooks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Template Method pattern solves code duplication by defining a fixed algorithm in a base class and letting subclasses implement only the unique steps. Python developers often use this pattern implicitly in their code, but understanding the Template Method makes it much easier to read other people's code and helps create a flexible architecture.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>designpatterns</category>
      <category>python</category>
    </item>
    <item>
      <title>Django + PgBouncer in Production: Pitfalls, Fixes, and Survival Tricks</title>
      <dc:creator>Artem</dc:creator>
      <pubDate>Sun, 14 Sep 2025 16:27:05 +0000</pubDate>
      <link>https://forem.com/artemooon/django-pgbouncer-in-production-pitfalls-fixes-and-survival-tricks-3jib</link>
      <guid>https://forem.com/artemooon/django-pgbouncer-in-production-pitfalls-fixes-and-survival-tricks-3jib</guid>
      <description>&lt;p&gt;In this article I will tell you about my experience of using &lt;a href="https://www.pgbouncer.org/" rel="noopener noreferrer"&gt;PgBouncer&lt;/a&gt; with the Production Django application, and how it worked for us and what difficulties we met.&lt;/p&gt;

&lt;p&gt;First, I’ll explain why we needed a connection pooler like PgBouncer and how it helps solve common database connection overhead problems. After that, I will walk you through our installation process and share our experience using it in a production environment, including the specific problems we faced and the solutions we implemented.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why did we need to add PgBouncer?
&lt;/h2&gt;

&lt;p&gt;Our backend wasn’t constantly flooded with users, but during ad campaigns we saw huge traffic spikes that created hundreds of open connections to our PostgreSQL instance. This connection overhead was a significant bottleneck, so we decided to offload the connection burden from the database by adding a connection pooler.&lt;/p&gt;

&lt;p&gt;To understand the problem, we first need to look at how PostgreSQL handles client connections. Every time a new connection is established, the following occurs: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Handshake_(computing)" rel="noopener noreferrer"&gt;TCP Handshake&lt;/a&gt;: A three-way handshake between the client (our Django app) and the database to establish a connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Transport_Layer_Security" rel="noopener noreferrer"&gt;TLS/SSL&lt;/a&gt; Handshake: if TLS/SSL is enabled, another handshake takes place to secure the connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication: the database verifies the user's credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to these steps, each open connection consumes a certain amount of memory and CPU resources. This can add an unwanted extra load on the database server. Furthermore, if the number of connections exceeds the configured maximum, new connection requests can fail.&lt;/p&gt;

&lt;p&gt;PgBouncer's core job here is to maintain a fixed number of "hot" connections to the database. This effectively solves the problem by allowing the expensive three-step connection process described above to happen only once for each connection in the pool.&lt;/p&gt;

&lt;p&gt;When our Django app needs a connection, it talks to PgBouncer, which immediately provides a ready-to-use connection from its pool. After the query or transaction is finished, PgBouncer immediately returns the connection to the pool, ready for the next client to use.&lt;/p&gt;
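
&lt;p&gt;The checkout/return cycle above can be illustrated with a toy, in-memory pool (a simplified sketch of the idea, not PgBouncer's actual implementation; all names here are invented): the expensive connect happens once per pooled connection, and every subsequent request reuses a hot one.&lt;/p&gt;

```python
import queue

class ToyPool:
    """Minimal illustration of what a pooler does: hand out
    already-open connections instead of creating new ones."""

    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            # Pay the connection cost once, up front.
            self._idle.put(connect())

    def checkout(self):
        # Blocks if the pool is exhausted, just like a saturated pooler.
        return self._idle.get()

    def checkin(self, conn):
        # Return the "hot" connection for the next client to reuse.
        self._idle.put(conn)

# Stand-in for an expensive connect (TCP + TLS + auth in real life).
opened = []
def fake_connect():
    opened.append(object())
    return opened[-1]

pool = ToyPool(size=2, connect=fake_connect)

for _ in range(10):          # ten "requests" ...
    conn = pool.checkout()
    pool.checkin(conn)

print(len(opened))           # prints 2: only two connections ever opened
```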

&lt;p&gt;You can always check the max_connections limit for your specific database (for example, on &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits" rel="noopener noreferrer"&gt;Amazon RDS&lt;/a&gt;). If you can predict a fairly fixed and manageable amount of connections, you might not need to add the complexity of a connection pooler to your architecture. However, in our case, the unpredictable traffic spikes made a connection pooler a necessity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation and configuration process
&lt;/h2&gt;

&lt;p&gt;Installing and setting up PgBouncer is pretty straightforward: you can use the official &lt;a href="https://hub.docker.com/r/bitnami/pgbouncer/" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt; or build it directly from &lt;a href="https://www.pgbouncer.org/install.html" rel="noopener noreferrer"&gt;sources&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We chose to use a Docker container, as we use ECS for our deployment process.&lt;/p&gt;

&lt;p&gt;The core of PgBouncer's setup is the &lt;code&gt;pgbouncer.ini&lt;/code&gt; configuration file; if you use the Docker container, you can configure it with environment variables instead. These are the parameters we used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PGBOUNCER_LISTEN_ADDRESS=*
PGBOUNCER_POOL_MODE=transaction
PGBOUNCER_PORT=6432
PGBOUNCER_MAX_CLIENT_CONN=1000
PGBOUNCER_DEFAULT_POOL_SIZE=80
PGBOUNCER_MIN_POOL_SIZE=30
PGBOUNCER_AUTH_TYPE=md5
POSTGRESQL_PASSWORD=&amp;lt;YOUR_PASSWORD&amp;gt;
POSTGRESQL_HOST=&amp;lt;YOUR_HOST&amp;gt;
POSTGRESQL_PORT=&amp;lt;POSTGRES_PORT&amp;gt;
POSTGRESQL_USERNAME=&amp;lt;POSTGRES_USERNAME&amp;gt;
POSTGRESQL_DATABASE=&amp;lt;POSTGRES_DATABASE&amp;gt;
PGBOUNCER_DATABASE=&amp;lt;PGBOUNCER_DATABASE&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is an overview of the most interesting parameters:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_LISTEN_ADDRESS&lt;/code&gt;: Specifies the network interface PgBouncer listens on for incoming client connections. We set it to &lt;code&gt;*&lt;/code&gt;, which means it listens on all available interfaces. Using &lt;code&gt;*&lt;/code&gt; is not recommended unless you have correctly configured your security groups on a platform like AWS to restrict incoming traffic. For better security, you should use a more specific address, such as the private IP of the host or a specific network interface.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_POOL_MODE&lt;/code&gt;: Defines how PgBouncer manages its connections. We chose transaction mode, which returns a connection to the pool after each transaction. More details on this below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_MAX_CLIENT_CONN&lt;/code&gt;: The maximum number of total client connections your PgBouncer instance will accept.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_DEFAULT_POOL_SIZE&lt;/code&gt;: The number of "hot" connections PgBouncer maintains to the database for each user/database pair. Set this to match your typical number of concurrent connections.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_MIN_POOL_SIZE&lt;/code&gt;: The minimum number of connections PgBouncer will maintain in the pool, even during periods of low activity. This helps reduce latency by keeping connections ready.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PGBOUNCER_AUTH_TYPE&lt;/code&gt;: The authentication method PgBouncer uses to verify clients.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;POSTGRESQL_HOST&lt;/code&gt;, &lt;code&gt;POSTGRESQL_PORT&lt;/code&gt;, &lt;code&gt;POSTGRESQL_USERNAME&lt;/code&gt;, &lt;code&gt;POSTGRESQL_DATABASE&lt;/code&gt;: These parameters tell PgBouncer how to connect to the actual PostgreSQL database. They are the credentials for the database itself, not the client.&lt;/p&gt;
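
&lt;p&gt;For reference, a Django &lt;code&gt;DATABASES&lt;/code&gt; entry pointed at PgBouncer might look like the sketch below (a hypothetical settings fragment; the env var names and defaults are placeholders, not our exact configuration). With an external pooler, &lt;code&gt;CONN_MAX_AGE&lt;/code&gt; is commonly left at 0 so that PgBouncer, not Django, owns connection lifetimes.&lt;/p&gt;

```python
import os

# Hypothetical settings.py fragment: Django connects to PgBouncer's
# host/port instead of PostgreSQL directly; PgBouncer forwards to the DB.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_NAME", "app"),
        "USER": os.environ.get("POSTGRES_USER", "app"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("PG_BOUNCER_HOST", "127.0.0.1"),
        "PORT": os.environ.get("PG_BOUNCER_PORT", "6432"),  # PgBouncer, not 5432
        "CONN_MAX_AGE": 0,  # let PgBouncer manage connection reuse
    }
}
```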

&lt;p&gt;&lt;strong&gt;Security note&lt;/strong&gt;: you should make PgBouncer accessible only from within your trusted network (preferably only for the backend instance) and treat access to it with the same level of caution as you would a direct connection to your database.&lt;/p&gt;

&lt;h2&gt;
  
  
  The difficult choice of the pool mode
&lt;/h2&gt;

&lt;p&gt;There are three pool modes in PgBouncer: &lt;strong&gt;Session mode&lt;/strong&gt;, &lt;strong&gt;Transaction mode&lt;/strong&gt;, and &lt;strong&gt;Statement mode&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Statement mode&lt;/strong&gt; is the most aggressive and performant pooling mode, but it's the least compatible with most applications because it breaks transaction support. On the other hand, &lt;strong&gt;session mode&lt;/strong&gt; is the most compatible mode for a typical application but the least performant, although it still gives you the advantage of a pool of "hot" TCP connections. &lt;strong&gt;Transaction mode&lt;/strong&gt; allocates a connection per transaction and sits somewhere in the middle of the three.&lt;/p&gt;

&lt;p&gt;There is no single "best" mode; the right choice depends on your application's unique needs. Transaction mode is the standard for most Django apps, but if your app relies on session-level features, session mode is a necessary compromise.&lt;/p&gt;

&lt;p&gt;So we started with session mode on our dev server, then used it in production and confirmed everything was stable. Afterward, we enabled transaction mode on production, which required some tricks to get working.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tricks we used to solve transaction mode issues and make our Django application reliable
&lt;/h2&gt;

&lt;p&gt;In this section, I describe the complexities we ran into with PgBouncer and the solutions we applied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrations
&lt;/h3&gt;

&lt;p&gt;When running Django migrations through PgBouncer in transaction pooling mode, some operations fail because they require a single session with an open transaction. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CREATE INDEX CONCURRENTLY or DROP INDEX CONCURRENTLY&lt;br&gt;
These statements are disallowed inside a transaction block, but Django wraps migrations in transactions by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Long‐running schema changes (e.g., adding constraints) may get interrupted because transaction pooling swaps connections between queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Advisory locks (pg_advisory_lock) and similar session-based features also fail under transaction pooling.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve it we run migrations against PostgreSQL directly, not through PgBouncer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Unset PG_BOUNCER_HOST and PG_BOUNCER_PORT so the migration runs directly against PostgreSQL
PG_BOUNCER_HOST= PG_BOUNCER_PORT= python manage.py migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Search path
&lt;/h3&gt;

&lt;p&gt;We used a custom schema for storing our tables and specified it with the &lt;a href="https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH" rel="noopener noreferrer"&gt;search path&lt;/a&gt; in the database connection settings.&lt;/p&gt;

&lt;p&gt;When PgBouncer is running in transaction pooling mode, every query may use a different backend connection. That means any session-level settings (like search_path, SET ROLE, SET TIMEZONE, etc.) do not persist across queries.&lt;/p&gt;

&lt;p&gt;This works fine when Django talks directly to Postgres, because each session is pinned to a single connection.&lt;br&gt;
But with PgBouncer in transaction pooling, the search_path setting disappears as soon as the transaction ends. The next query may land on a different connection that doesn’t know about your search_path override.&lt;/p&gt;

&lt;p&gt;This may lead to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A migration that tries to create a table may end up running in the default public schema, ignoring your schema defined in the search path.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You end up with “phantom” tables in public, while your app is looking in your defined schema.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We fixed it by making our non-default schema the actual default schema, and stopped relying on search_path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We altered the database setup so that our custom schema is the default, using &lt;code&gt;ALTER ROLE &amp;lt;db_user&amp;gt; SET search_path = &amp;lt;your_schema&amp;gt;;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Then we removed the &lt;code&gt;OPTIONS&lt;/code&gt; → &lt;code&gt;-c search_path=cryptonary_wp2&lt;/code&gt; entry entirely from Django’s database settings.&lt;/li&gt;
&lt;li&gt;Then we migrated the tables from the public schema to our custom schema using an SQL script.&lt;/li&gt;
&lt;/ul&gt;
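
&lt;p&gt;The move itself can be done with &lt;code&gt;ALTER TABLE ... SET SCHEMA&lt;/code&gt;. A sketch of what such an SQL script could look like (the schema and table names are placeholders, not our actual ones):&lt;/p&gt;

```sql
-- Move a table that ended up in public into the intended schema.
ALTER TABLE public.your_table SET SCHEMA your_schema;

-- Or generate the statements for every table currently in public:
SELECT format('ALTER TABLE public.%I SET SCHEMA your_schema;', tablename)
FROM pg_tables
WHERE schemaname = 'public';
```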
&lt;h3&gt;
  
  
  Disabling server-side cursors
&lt;/h3&gt;

&lt;p&gt;Django can stream large querysets efficiently by using server-side cursors under the hood — for example, when you call .iterator(), rows are fetched incrementally instead of all being loaded into memory at once.&lt;/p&gt;

&lt;p&gt;However, PgBouncer in transaction pooling mode breaks this.&lt;/p&gt;

&lt;p&gt;Server-side cursors depend on the connection being pinned to the same backend for the lifetime of the cursor.&lt;/p&gt;

&lt;p&gt;Transaction pooling reuses connections between queries, so as soon as the transaction ends, the cursor is lost.&lt;/p&gt;

&lt;p&gt;In practice, this results in errors like cursor does not exist when iterating, or worse, inconsistent data reads.&lt;/p&gt;

&lt;p&gt;For example, we periodically saw this exception when browsing the Django admin panel:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;InvalidCursorName: cursor "_django_curs_140116399986368_sync_1" does not exist&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To avoid this, Django has a built-in option for the database connection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
    "default": {
        # ...
        "DISABLE_SERVER_SIDE_CURSORS": True,
    },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this setting enabled, Django’s &lt;code&gt;.iterator()&lt;/code&gt; will fetch the entire result set into your application’s memory at once. That means memory usage can grow significantly when iterating over large querysets, which may affect your application’s performance.&lt;/p&gt;

&lt;p&gt;If you still need the benefits of server-side cursors, you could define an extra connection that goes directly to the database and use it for those queries:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;YourModel.objects.using('direct_db_connection').filter().iterator()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Reliability: Fallback from PgBouncer to Postgres
&lt;/h3&gt;

&lt;p&gt;We also added a simple reliability check in our settings to make sure the application can still connect even if PgBouncer is down or misconfigured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

import psycopg2
from psycopg2 import OperationalError

PG_BOUNCER_HOST = os.environ.get("PG_BOUNCER_HOST")
PG_BOUNCER_PORT = os.environ.get("PG_BOUNCER_PORT")

try:
    if PG_BOUNCER_HOST and PG_BOUNCER_PORT:
        conn = psycopg2.connect(
            dbname=os.environ.get("POSTGRES_NAME"),
            user=os.environ.get("POSTGRES_USER"),
            password=os.environ.get("POSTGRES_PASSWORD"),
            host=PG_BOUNCER_HOST,
            port=PG_BOUNCER_PORT,
            connect_timeout=2,
        )
        conn.close()
        DB_HOST = PG_BOUNCER_HOST
        DB_PORT = PG_BOUNCER_PORT
    else:
        raise OperationalError("No PgBouncer host set")

except OperationalError:
    DB_HOST = POSTGRES_HOST
    DB_PORT = POSTGRES_PORT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This provides a graceful degradation path: the app may lose PgBouncer’s pooling benefits temporarily, but it will stay online and functional.&lt;/p&gt;

&lt;p&gt;I think that was a lot, right? Do you still want to connect PgBouncer to your Django application or &lt;a href="https://docs.djangoproject.com/en/5.2/ref/databases/" rel="noopener noreferrer"&gt;CONN_MAX_AGE&lt;/a&gt; saves the day?&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;PgBouncer is the right tool to help scale your Postgres app when the number of database connections becomes an issue. Keep in mind that while adding PgBouncer introduces additional complexity for your team, careful setup, monitoring, and tuning should improve your app’s performance.&lt;/p&gt;

&lt;p&gt;That said, PgBouncer should only be introduced if you really need it. It adds complexity, hidden caveats (like migrations, search path resets, server-side cursors), and operational costs. If you do adopt it, make sure you fully understand these trade-offs and adjust your Django configuration accordingly.&lt;/p&gt;

</description>
      <category>python</category>
      <category>webdev</category>
      <category>database</category>
      <category>django</category>
    </item>
  </channel>
</rss>
