Anastasiia Kim
Essential Dev Environments with Docker Compose

When I started learning backend development, I kept running into problems trying to run a database or an external service. Setting up the required environment on my local machine was frustrating: tangled configuration, missing dependencies and mismatched versions.

This is where Docker comes in. Docker is a convenient, fast tool for development and deployment: it lets you configure and run services in isolated containers. And when you need to run several services at once, Docker Compose makes that even easier.

For example: you’re building an API that uses PostgreSQL, Kafka and Redis. Docker Compose helps you run all three quickly and smoothly.

You could also build and run each service yourself with a Dockerfile and separate docker run commands, but a Dockerfile describes only a single image. A Docker Compose file is easier to configure and more human-readable, especially when you run multiple services such as a backend, a database and a message broker.
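For comparison, here is a rough sketch of what starting just two of those services looks like with plain docker run commands (the flags and paths are illustrative); the Compose files below replace all of this with one declarative file and a single docker compose up:

# Start PostgreSQL and Redis by hand, one long command per container
docker run -d --name database \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=12345 \
  -p 5432:5432 \
  -v "$(pwd)/pgdata:/var/lib/postgresql/data" \
  postgres:latest

docker run -d --name redis \
  -p 6379:6379 \
  -v "$(pwd)/redis-data:/data" \
  redis:latest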

Before running any Docker commands, ensure Docker and Docker Compose are installed on your machine.

PostgreSQL image

Let’s start with a PostgreSQL image.
First, create a docker-compose.yml file in the root folder of your project. Here is a basic setup to run a Postgres container.

version: '3.8'

services:
 database:     
   image: postgres:latest   
   restart: unless-stopped  
   environment:  
     - POSTGRES_USER=user   
     - POSTGRES_PASSWORD=12345
   ports:   
     - "5432:5432" 
   volumes: 
     - ./pgdata:/var/lib/postgresql/data

In this basic setup we have:

  • services - A section that declares all the containers you want to run.
  • database - The name of the service. You can use any name you like.
  • image: postgres:latest - Specifies the Docker image to use (in this case, the latest official PostgreSQL).
  • restart: unless-stopped - Ensures the container restarts automatically unless you stop it manually.
  • environment - Sets environment variables inside the container; here POSTGRES_USER=user and POSTGRES_PASSWORD=12345. You can add more variables, and to avoid hardcoding secrets it’s better to keep them in a .env file (a sketch follows below).
  • ports - Maps container ports to the host. Before the colon is the host machine port, after it is the container port; PostgreSQL’s default port is 5432.
  • volumes - Persists data by mapping a local directory to the container’s data directory.

This setup gives you a fully functional local PostgreSQL server with persistent storage.
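To avoid hardcoding credentials, you can move them into a .env file next to docker-compose.yml and pass it to the service with env_file. A minimal sketch (the variable names come from the Postgres image; everything else stays the same):

# .env
POSTGRES_USER=user
POSTGRES_PASSWORD=12345

# docker-compose.yml (only the changed part)
services:
  database:
    image: postgres:latest
    env_file:
      - .env

Remember to add .env to your .gitignore so the credentials stay out of version control.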

Redis image

Redis container with basic setup:

services:
 redis:
   image: redis:latest
   restart: unless-stopped
   ports:
     - "6379:6379"
   volumes:
     - ./redis-data:/data

Redis can be configured further by providing a Redis configuration file (redis.conf), but this basic setup is enough to get started. A sketch of mounting a custom config follows below.
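A minimal sketch of that, assuming a local file at ./conf/redis.conf (the official Redis image documentation uses /usr/local/etc/redis/redis.conf as the target path):

services:
  redis:
    image: redis:latest
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - ./redis-data:/data
      - ./conf/redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]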

Kafka image

Kafka container with basic setup:

services:
 kafka:
   image: apache/kafka:latest
   ports:
     - "9092:9092"
   environment:
     KAFKA_NODE_ID: 1
     KAFKA_PROCESS_ROLES: broker,controller
     KAFKA_LISTENERS: PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
     KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
     KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
     KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
     KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
     KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
     KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
     KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
     KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
     KAFKA_NUM_PARTITIONS: 3

Let's take a closer look at the environment variables. In this case we use KRaft mode (no ZooKeeper) with a single-node cluster:

  • KAFKA_NODE_ID: 1 - ID of this Kafka node
  • KAFKA_PROCESS_ROLES: broker,controller - Node acts as both message broker and cluster controller
  • KAFKA_LISTENERS: PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093 - Listener ports for clients and controller
  • KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092 - How Kafka advertises itself to clients
  • KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER - Designates which listener is used for controller traffic
  • KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT - Listener protocols
  • KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093 - Controller quorum (just this node)
  • KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 - Replication factor for the internal consumer-offsets topic (1 is enough for a single node)
  • KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1 - Transaction log replication (single-node safe)
  • KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1 - Minimum in-sync replicas for transactions
  • KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0 - Faster consumer group rebalancing
  • KAFKA_NUM_PARTITIONS: 3 - Default number of partitions for new topics
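Once the broker is up, you can sanity-check it with the Kafka CLI tools inside the container. A small sketch, assuming the apache/kafka image keeps its scripts under /opt/kafka/bin (the exact path may differ between image versions):

# Create a test topic and then list all topics on the single-node broker
docker compose exec kafka /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --create --topic test-topic

docker compose exec kafka /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --list

Note that with these settings the broker advertises itself as localhost:9092, which suits clients on the host machine; a service running in another container would need an advertised listener reachable on the Compose network (for example kafka:9092).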

Running and Managing Services

After saving the docker compose file, open a terminal in the root folder of your project and run the following command to build the images. (For services that only pull a prebuilt image, like the ones above, this step is optional: docker compose up pulls missing images automatically.)

docker compose build

Then start the containers:

docker compose up

You will see the services’ logs in the terminal if everything builds and starts successfully.

If you prefer to run the containers in the background, add the -d flag (detached mode) to the command:

docker compose up -d

The containers will run in the background and the terminal will not be blocked by logs. To stop and remove the containers running in the background, use this command:

docker compose down

If you want to see all containers that are currently running, use the following command:

docker ps

If you want to make changes to your containers (change a port or add more environment variables), you need to:

  • stop the containers (Ctrl+C),

  • edit the docker-compose file,

  • save it, then restart the containers with:

docker compose up --build

The --build flag rebuilds the images before starting the containers, so the changes you’ve made are applied.

Logs and Debugging

Even if your docker compose file is perfect, things can still go wrong (a service fails to connect or crashes). The first step is to check the logs.
When you run docker compose up, the terminal shows logs from all services, but you can also check the logs of a specific service with:

docker compose logs redis

To follow the logs live:

docker compose logs -f redis

If the logs are not enough, you can open a shell inside a container and look around manually. docker compose exec targets the service by the name used in the compose file:

docker compose exec redis sh

To see all containers, both running and exited, run the following command:

docker ps -a

If you want to add more environment variables or other settings to your services, you can usually find the documentation on Docker Hub. Just type the image name into the search field and check the documentation section.

Combining Multiple Services

You can define all of the services in a single docker compose file, for example Postgres + Kafka + Redis (only the skeleton is shown here; a fuller sketch follows below):

services:
  kafka:
    # ...settings from the Kafka section above
  postgres:
    # ...settings from the PostgreSQL section above
  redis:
    # ...settings from the Redis section above
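To illustrate why a single file is convenient, here is a hedged sketch of how a hypothetical application service (the my-api image and the environment variable names are placeholders, not from this post) could reach the other services by their service names on the Compose network:

services:
  api:
    image: my-api:latest            # placeholder for your own backend image
    environment:
      - DATABASE_URL=postgres://user:12345@postgres:5432/user   # service names work as hostnames
      - REDIS_URL=redis://redis:6379
    depends_on:                     # start the dependencies first
      - postgres
      - redis
      - kafka
    ports:
      - "8080:8080"

  postgres:
    # ...settings from the PostgreSQL section above
  redis:
    # ...settings from the Redis section above
  kafka:
    # ...settings from the Kafka section above

Keep in mind that depends_on only controls start-up order; if the api has to wait until a dependency is actually ready, add a healthcheck to that service and use depends_on with condition: service_healthy.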

Final Tips

  • You can always find more configuration options and examples on Docker Hub

  • Check each image’s documentation for available environment variables

  • For complex projects, it’s good practice to keep secrets in a .env file instead of hardcoding them


