<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rafał Szczepanik</title>
    <description>The latest articles on Forem by Rafał Szczepanik (@rafalsz).</description>
    <link>https://forem.com/rafalsz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1433807%2F38186a3b-85c3-4833-86dd-a21bf5060508.jpeg</url>
      <title>Forem: Rafał Szczepanik</title>
      <link>https://forem.com/rafalsz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rafalsz"/>
    <language>en</language>
    <item>
      <title>Docker Compose and Devcontainers for Microservices Development</title>
      <dc:creator>Rafał Szczepanik</dc:creator>
      <pubDate>Mon, 28 Apr 2025 16:12:40 +0000</pubDate>
      <link>https://forem.com/rafalsz/docker-compose-and-devcontainers-for-microservices-development-44b8</link>
      <guid>https://forem.com/rafalsz/docker-compose-and-devcontainers-for-microservices-development-44b8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Imagine a scenario: you are a developer at a fast-growing company. There are many teams on board, each developing separate services that interact with one another. Let's say you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;App 1&lt;/strong&gt; - developed mostly by Team A, with occasional changes by Teams B and C. It communicates with App 2 and App 3 via REST.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App 2&lt;/strong&gt; - developed primarily by Teams B and A. It serves as an API for both App 1 and App 3, while also communicating with other microservices via gRPC or Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App 3&lt;/strong&gt; - developed mostly by Team C, with support from Team A, and occasional changes by Team B. This one interacts with microservice Y.&lt;/li&gt;
&lt;li&gt;…plus &lt;strong&gt;5 more microservices,&lt;/strong&gt; each with its own dependencies, databases, or message queues, developed by various combinations of teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The point is - developers frequently need to work on &lt;em&gt;multiple&lt;/em&gt; services, often simultaneously. They need to see how their changes affect the service they are working on and all dependent services - all without the friction and delay of deploying to a shared staging environment. Onboarding new team members needs to be swift, not a week-long setup marathon of wrestling with dependencies.&lt;/p&gt;

&lt;p&gt;So how do we approach this problem?&lt;/p&gt;

&lt;h2&gt;Traditional local development&lt;/h2&gt;

&lt;p&gt;The approach we all know. In this flow, we set up every service we need locally on our machine, manually installing all dependencies and following the steps in each README. This is good enough for a single repository, but it comes with problems at scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Port management&lt;/strong&gt; - every service has to have its own set of ports, and these addresses have to be configured in the other services. It is not easy to tell which service &lt;code&gt;localhost:3000&lt;/code&gt;, &lt;code&gt;localhost:3001&lt;/code&gt; or &lt;code&gt;localhost:3010&lt;/code&gt; is responsible for.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency management&lt;/strong&gt; - every project needs its own versions of Node.js, Go, Ruby, etc…&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Works on my machine"&lt;/strong&gt; - differences in OS, installed packages, and configurations leading to inconsistencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CORS, Web Storage API, etc.&lt;/strong&gt; - different ports mean different origins. This causes many issues for frontend apps, like blocked requests or missing cookies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding difficulties&lt;/strong&gt; - every new team member has to work through many README files, each of which differs from the others.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Devcontainers with Docker Compose and Traefik&lt;/h2&gt;

&lt;p&gt;This tutorial assumes familiarity with &lt;a href="https://docs.docker.com/get-started/docker-overview/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt; and &lt;a href="https://containers.dev/" rel="noopener noreferrer"&gt;Devcontainers&lt;/a&gt;, and that each of your services already has a &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;Concept Overview&lt;/h3&gt;

&lt;p&gt;The solution consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A central repository containing:

&lt;ul&gt;
&lt;li&gt;A Makefile with utilities (e.g., &lt;code&gt;init&lt;/code&gt; to pull service repositories)&lt;/li&gt;
&lt;li&gt;A Docker Compose configuration for orchestrating services&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Service repositories with:

&lt;ul&gt;
&lt;li&gt;Dockerfiles for containerization&lt;/li&gt;
&lt;li&gt;Devcontainer configurations for development&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://doc.traefik.io/traefik/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; integration:

&lt;ul&gt;
&lt;li&gt;Routes traffic to services. For example, you can have multiple frontends accessible on the same domain, like &lt;code&gt;localhost:3000/svc1&lt;/code&gt;, &lt;code&gt;localhost:3000/svc2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Provides a dashboard (&lt;code&gt;localhost:5555&lt;/code&gt; by default) to inspect the routing configuration.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
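&lt;p&gt;To make the layout concrete, after pulling the service repositories the central repository might look roughly like this (names are illustrative):&lt;/p&gt;

```text
compose-repo/
├── Makefile              # utility targets (init, dev, down, ...)
├── traefik.yml           # static Traefik configuration
├── compose.yml           # orchestrates Traefik and all services
├── .env                  # COMPOSE_PROFILES and other compose-level variables
├── svc1/                 # cloned service repository
│   ├── Dockerfile
│   ├── .devcontainer/
│   └── .env.example
└── svc2/
    └── ...
```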

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuvoygq2r662lyftz9fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsuvoygq2r662lyftz9fw.png" alt="Traefik Docker Provider Diagram" width="800" height="548"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: &lt;a href="https://doc.traefik.io/traefik/providers/docker/" rel="noopener noreferrer"&gt;Traefik Docker Provider Documentation&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Key Benefits&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flexible service management:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Run selected services using &lt;code&gt;COMPOSE_PROFILES&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Stop, rebuild, and restart services with ease&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless development experience:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Switch between running containerized services and developing in devcontainers&lt;/li&gt;
&lt;li&gt;Maintain identical networking configurations in both environments&lt;/li&gt;
&lt;li&gt;No more conflicting ports&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Set up the base compose repository&lt;/h3&gt;

&lt;p&gt;The first step is to create an empty repository, then inside it create a &lt;code&gt;Makefile&lt;/code&gt; with essential procedures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nv"&gt;REPOS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    git@github.com:some/repository1.git &lt;span class="se"&gt;\&lt;/span&gt;
    git@github.com:some/repository2.git &lt;span class="se"&gt;\&lt;/span&gt;
    git@github.com:some/repository3.git &lt;span class="se"&gt;\&lt;/span&gt;
    ...

&lt;span class="nl"&gt;.PHONY&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;init dev down&lt;/span&gt;

&lt;span class="nl"&gt;$(REPO_DIRS)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    git clone &lt;span class="nt"&gt;--filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;blob:none &lt;span class="p"&gt;$(&lt;/span&gt;filter %&lt;span class="nv"&gt;$@&lt;/span&gt;.git,&lt;span class="p"&gt;$(&lt;/span&gt;REPOS&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c"&gt;# Responsible for pulling repositories
&lt;/span&gt;&lt;span class="nl"&gt;init&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;$(REPO_DIRS)&lt;/span&gt;

&lt;span class="c"&gt;# Will be used as a shortcut to start all services
&lt;/span&gt;&lt;span class="nl"&gt;dev&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# Stops and removes all docker services
&lt;/span&gt;&lt;span class="nl"&gt;down&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="nv"&gt;COMPOSE_PROFILES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;all docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Traefik to work correctly, we first need to add a &lt;code&gt;traefik.yml&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;log&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;

&lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;insecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;providers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unix:///var/run/docker.sock&lt;/span&gt;
    &lt;span class="na"&gt;exposedByDefault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="c1"&gt;# This must match the network name used by your services&lt;/span&gt;
    &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-network&lt;/span&gt;

&lt;span class="na"&gt;ping&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's create &lt;code&gt;compose.yml&lt;/code&gt;. In this file we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Traefik service to handle routing between our microservices&lt;/li&gt;
&lt;li&gt;Define all our microservices with appropriate configurations&lt;/li&gt;
&lt;li&gt;Create an attachable network, allowing devcontainers to connect to the same network&lt;/li&gt;
&lt;li&gt;Assign profiles to each service for selective launching:

&lt;ul&gt;
&lt;li&gt;Each service gets its own profile (named after the service)&lt;/li&gt;
&lt;li&gt;All services get the &lt;code&gt;all&lt;/code&gt; profile for stopping all services at once&lt;/li&gt;
&lt;li&gt;You can group related services by assigning the same profile name to both. For example, if a database is useless without its API service, give both the same profile name so they always start together&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Configure the build process for each service, including environment variable handling&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Here's the complete &lt;code&gt;compose.yml&lt;/code&gt; configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;traefik&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik:v2.11.0&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;3000:80&lt;/span&gt; &lt;span class="c1"&gt;# Services you expose will be visible on localhost:3000/**&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5555:8080&lt;/span&gt; &lt;span class="c1"&gt;# This is a port for a traefik dashboard&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./traefik.yml:/etc/traefik/traefik.yml:ro&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--configFile=/etc/traefik/traefik.yml&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;postgres-db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;profiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres-db&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:16.2&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;

  &lt;span class="c1"&gt;# Lets assume svc1 is one of your repositories&lt;/span&gt;
    &lt;span class="na"&gt;svc1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc1&lt;/span&gt;
        &lt;span class="na"&gt;profiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;svc1&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc1:latest&lt;/span&gt;
        &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# here is a list of env files, that will be loaded&lt;/span&gt;
        &lt;span class="c1"&gt;# when service is started&lt;/span&gt;
        &lt;span class="c1"&gt;# (important - these won't be loaded for build process, for that&lt;/span&gt;
        &lt;span class="c1"&gt;# we will use something else)&lt;/span&gt;
    &lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;svc1/.env.example&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc2/.env.local&lt;/span&gt; &lt;span class="c1"&gt;# if exists, .env.local will override variables coming from .env.example&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./svc1&lt;/span&gt;
            &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
            &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# these arguments will be passed to Dockerfile ARGS&lt;/span&gt;
            &lt;span class="c1"&gt;# you can't use here directly env files, so you have to&lt;/span&gt;
            &lt;span class="c1"&gt;# set them manually to some env variable, along with default value:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;API_KEY=${API_KEY:-your_default_api_key}&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;LOG_LEVEL=${LOG_LEVEL:-info}&lt;/span&gt;

        &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# If you want to expose this service on localhost:3000/svc1 add this&lt;/span&gt;
            &lt;span class="c1"&gt;# Will route any traffic matching the rule to svc1 on port 5173&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.enable=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.services.svc1.loadbalancer.server.port=5173&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.routers.svc1.rule=(Host(`localhost`) || Host(`host.docker.internal`)) &amp;amp;&amp;amp; PathPrefix(`/svc1`)&lt;/span&gt;

        &lt;span class="s"&gt;...&lt;/span&gt;

  &lt;span class="c1"&gt;# Placeholder for another service (e.g., svc2)&lt;/span&gt;
  &lt;span class="na"&gt;svc2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc2&lt;/span&gt;
    &lt;span class="na"&gt;profiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;svc2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc2:latest&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;svc2/.env.example&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./svc2&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Expose svc2 on localhost:3000/svc2, routing to its internal port (e.g., 8080)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.enable=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.services.svc2.loadbalancer.server.port=8080&lt;/span&gt; &lt;span class="c1"&gt;# Adjust port as needed&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.routers.svc2.rule=(Host(`localhost`) || Host(`host.docker.internal`)) &amp;amp;&amp;amp; PathPrefix(`/svc2`)&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# This is crucial for devcontainers to be able to connect to this network&lt;/span&gt;
    &lt;span class="na"&gt;attachable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-network&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
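&lt;p&gt;Note the ordering of the &lt;code&gt;env_file&lt;/code&gt; entries: Compose applies the files from top to bottom, so a variable defined in a later file (&lt;code&gt;.env.local&lt;/code&gt;) overrides the same variable from an earlier one (&lt;code&gt;.env.example&lt;/code&gt;). It is the same "last one wins" behavior you would get from sourcing the files in order in a shell - a sketch with illustrative file names and values:&lt;/p&gt;

```shell
# Simulate env_file precedence: later files override earlier ones
printf 'API_URL=http://svc2:8080\n' > /tmp/.env.example
printf 'API_URL=http://localhost:8080\n' > /tmp/.env.local

. /tmp/.env.example
# the override file is optional, like `required: false` in compose
if [ -f /tmp/.env.local ]; then . /tmp/.env.local; fi

echo "$API_URL"   # prints: http://localhost:8080
```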



&lt;p&gt;Now let's create &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.example&lt;/code&gt; files. The &lt;code&gt;.env&lt;/code&gt; file is loaded automatically when running Docker Compose.&lt;/p&gt;

&lt;p&gt;Example &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;COMPOSE_PROFILES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres-db,svc1,...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run &lt;code&gt;make dev&lt;/code&gt;, which will start all services listed in the compose profiles. When an image is not available (as in svc1's case), it will be built from the build instructions provided.&lt;/p&gt;

&lt;p&gt;The build process can be tricky: we added default values for the build args because there's no easy way to load the env files during this first build.&lt;/p&gt;
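&lt;p&gt;The &lt;code&gt;${VAR:-default}&lt;/code&gt; syntax used in the build args is ordinary shell-style parameter expansion: Compose substitutes the variable's value when it is set in the environment and falls back to the default otherwise. A quick sketch of the behavior (the variable name mirrors the compose file above):&lt;/p&gt;

```shell
# Unset variable: the fallback after ':-' is used
unset API_KEY
echo "${API_KEY:-your_default_api_key}"   # prints: your_default_api_key

# Set variable: its value wins over the default
API_KEY=real_key
echo "${API_KEY:-your_default_api_key}"   # prints: real_key
```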

&lt;h3&gt;Enhancing the rebuild flow&lt;/h3&gt;

&lt;p&gt;Our setup revolves around keeping all environment files for services in their respective directories. Now let's implement a better process for rebuilding images.&lt;/p&gt;

&lt;p&gt;We'll introduce a new &lt;code&gt;build&lt;/code&gt; command in our Makefile, which will first load the env files related to the service and then rebuild the image.&lt;/p&gt;

&lt;p&gt;To do this, add in your &lt;code&gt;Makefile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="err"&gt;[...&lt;/span&gt; &lt;span class="err"&gt;existing&lt;/span&gt; &lt;span class="err"&gt;code&lt;/span&gt; &lt;span class="err"&gt;...]&lt;/span&gt;

&lt;span class="c"&gt;# List of all env files, in order of least relevant to most relevant
# build procedure will try to load all of them. Customize for your use case.
&lt;/span&gt;&lt;span class="nv"&gt;ENV_FILES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; .env.example .env.development .env .env.local .env.development.local

&lt;span class="nl"&gt;build&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="c"&gt;# If 'build' is called without specific service names, build all services defined in compose.&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;filter-out &lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;MAKECMDGOALS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        docker compose build&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;
    &lt;span class="c"&gt;# Get the list of service names passed as arguments &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;e.g., make build svc1 svc2&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;services&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;filter-out &lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="p"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;MAKECMDGOALS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="c"&gt;# Loop through each specified service name&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;service &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$$&lt;/span&gt;services&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Building service: &lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;service"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="c"&gt;# Temporarily export variables sourced from env files so docker compose build can see them&lt;/span&gt;
        &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="c"&gt;# Loop through the predefined list of env file names/suffixes&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;env_suffix &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="p"&gt;$(&lt;/span&gt;ENV_FILES&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="c"&gt;# Check if the env file exists in the service's directory&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;service/&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;env_suffix"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
                &lt;span class="c"&gt;# Source the file, loading its variables into the environment&lt;/span&gt;
                &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;service/&lt;/span&gt;&lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="s2"&gt;env_suffix"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
            &lt;span class="k"&gt;fi&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="c"&gt;# Stop exporting variables&lt;/span&gt;
        &lt;span class="nb"&gt;set&lt;/span&gt; +a&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        docker compose build &lt;span class="nv"&gt;$$&lt;/span&gt;service &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;# needed because of passing arguments to make command
&lt;/span&gt;&lt;span class="nl"&gt;%&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now when you run &lt;code&gt;make build svc1&lt;/code&gt;, your service will be built with the ARG values defined in your repository's env files (or the defaults if no env files are found).&lt;/p&gt;
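&lt;p&gt;The trick that makes this work is &lt;code&gt;set -a&lt;/code&gt;: between &lt;code&gt;set -a&lt;/code&gt; and &lt;code&gt;set +a&lt;/code&gt;, every variable that gets assigned - including those read from a sourced env file - is automatically exported, so child processes such as &lt;code&gt;docker compose build&lt;/code&gt; can see it. A minimal sketch (the file name is illustrative):&lt;/p&gt;

```shell
printf 'LOG_LEVEL=debug\n' > /tmp/demo.env

set -a            # auto-export every variable assigned from now on
. /tmp/demo.env   # sourcing the env file exports LOG_LEVEL
set +a            # stop auto-exporting

sh -c 'echo "$LOG_LEVEL"'   # a child process sees it; prints: debug
```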

&lt;h3&gt;Devcontainers in services&lt;/h3&gt;

&lt;p&gt;Currently, you have configured Docker Compose in a way that makes it easy to run and rebuild your services. They can communicate with each other because they're in the same network. But this setup is mainly useful when you're not actively developing these services.&lt;/p&gt;

&lt;p&gt;To enable easy development, navigate to your service directory and create a &lt;code&gt;.devcontainer&lt;/code&gt; directory. For simplicity, I'll present a configuration for a Node application.&lt;/p&gt;

&lt;p&gt;Inside the directory, create two files:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;devcontainer.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dockerComposeFile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./docker-compose.devcontainer.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"base"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"workspaceFolder"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/workspaces/${localWorkspaceFolderBasename}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"customizations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"vscode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"settings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"remote.autoForwardPorts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"extensions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;list&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;any...&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And &lt;code&gt;docker-compose.devcontainer.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:22.12.0&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sleep infinity&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;svc1&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;../../:/workspaces:cached&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# these has to be the same, as in main docker compose&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.enable=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.services.svc1.loadbalancer.server.port=5173&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik.http.routers.svc1.rule=(Host(`localhost`) || Host(`host.docker.internal`)) &amp;amp;&amp;amp; PathPrefix(`/svc1`)&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;shared-network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's all! When you open this repository in a devcontainer, the whole environment is ready for development. Routing to other services works either via a direct connection to their containers (for example &lt;code&gt;http://svc2:3000&lt;/code&gt;) or via Traefik (&lt;code&gt;http://host.docker.internal:3000/svc2&lt;/code&gt;). You will still have your local-development features, like hot-reload or debugging.&lt;/p&gt;

&lt;p&gt;Because the devcontainer shares the &lt;code&gt;shared-network&lt;/code&gt; with the services run by the main &lt;code&gt;compose.yml&lt;/code&gt;, you can connect from your devcontainer (&lt;code&gt;svc1&lt;/code&gt;) to other services (&lt;code&gt;svc2&lt;/code&gt;) in two main ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Direct Service-to-Service:&lt;/strong&gt; Use the service name defined in &lt;code&gt;compose.yml&lt;/code&gt; as the hostname (e.g., &lt;code&gt;fetch('http://svc2:8080/api/data')&lt;/code&gt;). This works because Docker's internal DNS resolves service names on the shared network. Use the internal port of the target service (e.g., &lt;code&gt;8080&lt;/code&gt; for &lt;code&gt;svc2&lt;/code&gt; in our example).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Via Traefik:&lt;/strong&gt; Access services through the Traefik proxy using the host machine's address and the path defined in the Traefik rule (e.g., &lt;code&gt;fetch('http://host.docker.internal:3000/svc2/api/data')&lt;/code&gt;). &lt;code&gt;host.docker.internal&lt;/code&gt; is a special DNS name that resolves to the host machine from within a Docker container. Traefik then routes the request based on the path (&lt;code&gt;/svc2&lt;/code&gt;) to the correct container and port.&lt;/li&gt;
&lt;/ol&gt;
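&lt;p&gt;As a quick illustration, the two addressing schemes above can be wrapped in a small helper. This is a hypothetical sketch: the service names, ports, and path prefix are the ones from this article's compose files.&lt;/p&gt;

```javascript
// Hypothetical helper showing the two ways to reach another service
// from inside the devcontainer. Names and ports follow this article's setup.

// 1. Direct service-to-service: Docker's internal DNS resolves the compose
//    service name on the shared network; use the target's internal port.
function directUrl(service, port, apiPath) {
  return `http://${service}:${port}${apiPath}`;
}

// 2. Via Traefik: go through the host; Traefik matches the path prefix
//    (e.g. /svc2) and forwards the request to the right container and port.
function traefikUrl(service, apiPath) {
  return `http://host.docker.internal:3000/${service}${apiPath}`;
}

console.log(directUrl('svc2', 8080, '/api/data'));
// http://svc2:8080/api/data
console.log(traefikUrl('svc2', '/api/data'));
// http://host.docker.internal:3000/svc2/api/data
```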


&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We've successfully created a development environment that allows teams to work efficiently on interconnected microservices without the usual friction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistent environments across the team&lt;/strong&gt; - no more "works on my machine" issues or week-long setup processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible service management&lt;/strong&gt; - use profiles to choose which services to run based on your current task.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy switching between modes&lt;/strong&gt; - run services in containers when just consuming them, or switch to devcontainers when actively developing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified networking&lt;/strong&gt; - consistent service discovery regardless of whether you're running services or developing them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use this guide as a baseline, and extend the setup according to your specific needs.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>microservices</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Scaling Next.js with Redis cache handler</title>
      <dc:creator>Rafał Szczepanik</dc:creator>
      <pubDate>Mon, 10 Jun 2024 13:08:22 +0000</pubDate>
      <link>https://forem.com/rafalsz/scaling-nextjs-with-redis-cache-handler-55lh</link>
      <guid>https://forem.com/rafalsz/scaling-nextjs-with-redis-cache-handler-55lh</guid>
      <description>&lt;p&gt;Let's say you have dozens of Next.js instances in production, running in your Kubernetes cluster. Most of your pages use &lt;a href="https://nextjs.org/docs/pages/building-your-application/data-fetching/incremental-static-regeneration" rel="noopener noreferrer"&gt;Incremental Static Regeneration&lt;/a&gt; (ISR), allowing pages to be generated and saved in file storage upon a user's first visit. Subsequent requests to the same page are served instantly from the saved version, bypassing regeneration, at least until the set revalidation period expires. Sounds good, right?&lt;/p&gt;

&lt;p&gt;Except it does not scale very well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;The data is generated but never cleaned up. Moreover, every instance of Next.js keeps its own copy of the same data, duplicated and isolated. Here at &lt;a href="http://odrabiamy.pl/" rel="noopener noreferrer"&gt;Odrabiamy.pl&lt;/a&gt;, we noticed that each of our k8s instances was taking up to 30 GB of storage. That is a massive amount of data for one node, but what if we have 20 nodes? That would be 600 GB of data, which could easily be shared.&lt;/p&gt;

&lt;h2&gt;
  
  
  Possible Solutions
&lt;/h2&gt;

&lt;p&gt;We tried to come up with a solution to this problem, and these were our options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use a &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noopener noreferrer"&gt;Kubernetes persistent volume&lt;/a&gt; and share the contents of the &lt;code&gt;.next&lt;/code&gt; directory&lt;/strong&gt;, but it has its cons:

&lt;ol&gt;
&lt;li&gt;Every pod would have read/write access, which could cause massive problems with race conditions between pods. We would have to &lt;a href="https://nextjs.org/docs/app/api-reference/next-config-js/incrementalCacheHandlerPath" rel="noopener noreferrer"&gt;write our own cache handler&lt;/a&gt; to make sure everything is stable.&lt;/li&gt;
&lt;li&gt;A mechanism would be needed to copy the &lt;code&gt;.next&lt;/code&gt; directory to a shared volume during deployment and to delete it once it is no longer needed.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Redis and the existing Next.js config to store all the generated pages&lt;/strong&gt; - which turned out to be the best fit for us in terms of implementation time and complexity.&lt;/li&gt;

&lt;/ol&gt;

&lt;h2&gt;
  
  
  Next.js and Redis
&lt;/h2&gt;

&lt;p&gt;By default, Next.js uses a file-based cache handler. However, Vercel has published a new config option to customize that. To do this, we have to load a custom cache handler in our &lt;code&gt;next.config.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;  &lt;span class="nx"&gt;cacheHandler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODE_ENV&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./cache-handler.cjs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We only load it in the production environment, as it isn’t necessary in development mode. Now it is time to implement the &lt;code&gt;cache-handler.cjs&lt;/code&gt; file. (Note: depending on your npm config, you might need to write this using ES modules.)&lt;/p&gt;
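&lt;p&gt;For reference, a minimal ESM variant of the same config might look like the sketch below. It assumes a &lt;code&gt;next.config.mjs&lt;/code&gt; with a &lt;code&gt;cache-handler.mjs&lt;/code&gt; next to it, and uses &lt;code&gt;import.meta.url&lt;/code&gt; because &lt;code&gt;require.resolve&lt;/code&gt; is not available in ES modules.&lt;/p&gt;

```javascript
// next.config.mjs (hypothetical ESM sketch): resolve the handler path
// from import.meta.url, since require.resolve does not exist in ESM.
import { fileURLToPath } from 'node:url';
import path from 'node:path';

const dirname = path.dirname(fileURLToPath(import.meta.url));

export default {
  cacheHandler:
    process.env.NODE_ENV === 'production'
      ? path.join(dirname, 'cache-handler.mjs')
      : undefined,
};
```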

&lt;p&gt;We will utilize the &lt;a href="https://caching-tools.github.io/next-shared-cache" rel="noopener noreferrer"&gt;@neshca/cache-handler&lt;/a&gt; package, which comes with pre-written handlers. The plan is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set Redis as the primary cache handler&lt;/li&gt;
&lt;li&gt;As a backup, use an LRU (Least Recently Used) in-memory cache&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The basic implementation will be as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// cache-handler.cjs&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CacheHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@neshca/cache-handler&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;CacheHandler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createLruCache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@neshca/cache-handler/local-lru&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createRedisCache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@neshca/cache-handler/redis-strings&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;CacheHandler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onCreation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;localCache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createLruCache&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;maxItemsNumber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;maxItemSizeBytes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;250&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Limit to 250 MB&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;redisCache&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;REDIS_URL env is not set, using local cache only.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Redis error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

      &lt;span class="nx"&gt;redisCache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createRedisCache&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;keyPrefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`next-shared-cache-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NEXT_PUBLIC_BUILD_NUMBER&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;// timeout for the Redis client operations like `get` and `set`&lt;/span&gt;
        &lt;span class="c1"&gt;// after this timeout, the operation will be considered failed and the `localCache` will be used&lt;/span&gt;
        &lt;span class="na"&gt;timeoutMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to initialize Redis cache, using local cache only.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;handlers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;redisCache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;localCache&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// This value is also used as revalidation time for every ISR site&lt;/span&gt;
      &lt;span class="na"&gt;defaultStaleAge&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NEXT_PUBLIC_CACHE_IN_SECONDS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
      &lt;span class="c1"&gt;// This makes sure, that resources without set revalidation time aren't stored infinitely in Redis&lt;/span&gt;
      &lt;span class="na"&gt;estimateExpireAge&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;staleAge&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;staleAge&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;CacheHandler&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But here is one interesting caveat. What if Redis isn’t available during server start? The line &lt;code&gt;await client.connect();&lt;/code&gt; will fail, and the first page load will be delayed. Worse, Next.js will then try to initialize a new CacheHandler every time someone visits any page.&lt;/p&gt;

&lt;p&gt;That is why we decided to use only the LRU cache in such cases. However, the solution is not trivial: &lt;code&gt;createClient&lt;/code&gt; doesn’t throw errors, it reports them only through callbacks, so a workaround is needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;      &lt;span class="p"&gt;...&lt;/span&gt;
      &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;isReady&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REDIS_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;reconnectStrategy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isReady&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;)};&lt;/span&gt;

      &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Redis error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ready&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;isReady&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

      &lt;span class="p"&gt;...&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that Next.js will not try to reconnect if the initial connection fails. In other cases, reconnection is desired and works like a charm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance and stability
&lt;/h2&gt;

&lt;p&gt;Our performance tests showed that CPU usage increased by about 2%, but response times stayed the same.&lt;/p&gt;

&lt;p&gt;At Odrabiamy, our goal is not only to have a performant solution but also to have independent infrastructure layers, so that any failure does not influence the functioning of the entire application. This is where the Least Recently Used (LRU) cache comes into play as a crucial fallback mechanism. During our performance tests, we manually terminated Redis multiple times, which resulted in &lt;strong&gt;zero downtime&lt;/strong&gt;. The transition between Redis and the LRU cache was so seamless that it wasn’t even noticeable in our performance graphs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the case of multiple Next.js instances running on a Kubernetes cluster, it is worth considering replacing the default file-system-based cache with a Redis-backed one. This can free up your storage resources without risk or performance degradation. The setup is straightforward and has already been battle-tested in our production environment.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>nextjs</category>
      <category>performance</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
