Hi there! I'm Shrijith Venkatrama, founder of Hexmos. Right now, I’m building LiveAPI, a first-of-its-kind tool that automatically indexes API endpoints across all your repositories. LiveAPI helps you discover, understand, and use APIs in large tech infrastructures with ease.
Docker Compose is a powerful tool for orchestrating multi-container applications, especially for on-premises Software-as-a-Service (SaaS) setups. It simplifies defining, running, and managing services, networks, and volumes in a single YAML file. But when you’re building robust on-prem SaaS systems, you need more than basic setups. You need advanced patterns to handle scalability, security, and maintainability while keeping things developer-friendly.
This article dives into practical, advanced Docker Compose patterns tailored for on-prem SaaS. We’ll cover real-world examples, complete code snippets, and tips to make your deployments resilient and efficient. Let’s get started.
1. Structuring Multi-Service Compose Files for Clarity
A clean Docker Compose file is critical for on-prem SaaS, where you’re juggling multiple services like APIs, databases, and workers. Splitting services logically and using YAML anchors prevents messy, unmaintainable configs.
Instead of a single bloated file, organize services by function (e.g., frontend, backend, database) and use anchors to reuse common configurations. This reduces duplication and makes updates easier.
Example: Modular Compose with Anchors
Here’s a sample setup for a SaaS app with a web server, database, and background worker.
version: '3.8'

x-common: &common-config
  logging:
    driver: json-file
    options:
      max-size: "10m"
      max-file: "3"

services:
  web:
    <<: *common-config
    image: my-saas-app:web-latest
    build:
      context: ./web
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db
      - REDIS_HOST=redis
    depends_on:
      - db
      - redis
    networks:
      - app-network

  db:
    <<: *common-config
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

  redis:
    <<: *common-config
    image: redis:7
    networks:
      - app-network

  worker:
    <<: *common-config
    image: my-saas-app:worker-latest
    build:
      context: ./worker
      dockerfile: Dockerfile
    environment:
      - DB_HOST=db
      - REDIS_HOST=redis
    depends_on:
      - db
      - redis
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
What this does: The x-common anchor applies consistent logging settings across services. Each service (web, db, redis, worker) is clearly defined, with dependencies and networking configured. Run docker-compose up -d to start it. You’ll see containers spin up, connected via the app-network.
Tip: Use meaningful service names and avoid generic ones like “app” to make debugging easier. For more on YAML anchors, check Docker’s official docs.
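As the stack grows, you can also split the Compose file itself: keep a shared base file and layer environment-specific overrides on top with -f. A minimal sketch (the file names are just a convention, and LOG_LEVEL is an assumed app setting shown only to illustrate an override):

docker-compose.prod.yml:

services:
  web:
    environment:
      - LOG_LEVEL=warn
    restart: unless-stopped

Run docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d and Compose merges the files, with later files overriding earlier ones.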
2. Scaling Services for High Availability
On-prem SaaS often needs horizontal scaling to handle load without cloud autoscaling. Docker Compose supports scaling via the --scale flag, but you need to plan for load balancing and state management.
Use a reverse proxy like Nginx or Traefik to distribute traffic across scaled instances. For stateful services like databases, scaling is trickier—stick to single instances or use dedicated clustering (e.g., PostgreSQL replication outside Compose).
Example: Scaling Web Service with Nginx
Here’s how to scale the web service and add Nginx as a load balancer.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network
    deploy:
      replicas: 3

  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - web
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
nginx.conf:
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  upstream web_backend {
    server web:8080;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://web_backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
What this does: The deploy.replicas: 3 setting scales the web service to three instances, and Nginx balances traffic across them. Run docker-compose up -d --scale web=3 to start. You’ll see three web containers and one Nginx container. Access the app via http://localhost.
Tip: Use healthcheck in Compose to ensure only healthy containers receive traffic. See Nginx load balancing docs for advanced configs.
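If you also want Nginx itself to stop routing to an instance that keeps failing, passive upstream checks are a simple option (a sketch; the thresholds are illustrative):

upstream web_backend {
  # Take a server out of rotation for 30s after 3 consecutive failed requests
  server web:8080 max_fails=3 fail_timeout=30s;
}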
3. Managing Secrets Securely
On-prem SaaS deployments need secure secret management to protect sensitive data like API keys and database credentials. Docker Compose supports secrets via files or environment variables, but for on-prem, file-based secrets are safer: values passed as plain environment variables show up in docker inspect output.
Example: Using Docker Secrets
Here’s how to use file-based secrets for a database password.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
      - DB_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: saasdb
    secrets:
      - db_password
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

secrets:
  db_password:
    file: ./db_password.txt

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
Create a secret file:
echo "securepassword123" > db_password.txt
What this does: The db_password secret is mounted as a file at /run/secrets/db_password, and the app and database read it from there. Run docker-compose up -d to start. The secret value never appears in environment variables.
Tip: Restrict file permissions (chmod 600 db_password.txt) to prevent unauthorized access. Learn more in Docker’s secrets guide.
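The postgres image understands POSTGRES_PASSWORD_FILE out of the box; your own app image usually needs a small entrypoint to honor the same *_FILE convention. A minimal sketch, assuming a POSIX shell entrypoint and that the app reads DB_PASSWORD:

#!/bin/sh
# entrypoint.sh: resolve the *_FILE convention before starting the app
if [ -n "$DB_PASSWORD_FILE" ] && [ -f "$DB_PASSWORD_FILE" ]; then
  DB_PASSWORD="$(cat "$DB_PASSWORD_FILE")"
  export DB_PASSWORD
fi
exec "$@"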
4. Optimizing Resource Usage
On-prem hardware has finite resources, so resource limits in Docker Compose are crucial to prevent one service from starving others. Use deploy.resources to set CPU and memory limits.
Example: Resource-Constrained Services
Here’s how to limit resources for a web and worker service.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network

  worker:
    image: my-saas-app:worker-latest
    deploy:
      resources:
        limits:
          cpus: '0.75'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
What this does: The web service is capped at 0.5 CPU cores and 512MB of memory, while the worker gets 0.75 cores and 1GB. Run docker-compose up -d to apply. Check actual usage with docker stats.
Tip: Monitor resource usage and adjust limits based on load testing. Docker’s resource constraints docs have more details.
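Note that deploy.resources limits are honored by recent Docker Compose releases even outside Swarm; if you’re on an older version that ignores them, the service-level keys are one fallback (a sketch mirroring the web limits above):

services:
  web:
    image: my-saas-app:web-latest
    # Older Compose file formats: per-service keys instead of deploy.resources
    cpus: 0.5
    mem_limit: 512m
    mem_reservation: 256m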
5. Handling Persistent Data with Volumes
SaaS apps need persistent storage for databases, user uploads, or logs. Docker Compose volumes ensure data survives container restarts. Use named volumes for portability and bind mounts for local development.
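For local development, a bind mount maps source code straight into the container so changes show up without a rebuild. A quick sketch (the ./web host directory and /app container path are assumptions about your image layout):

services:
  web:
    image: my-saas-app:web-dev
    volumes:
      # Bind mount: host edits are visible inside the container immediately
      - ./web:/app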
Example: Named Volumes for Database
Here’s a setup with a named volume for PostgreSQL.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:
    name: saas-db-data

networks:
  app-network:
    driver: bridge
What this does: The db-data volume persists PostgreSQL data at /var/lib/postgresql/data. Run docker-compose up -d to start. If the container stops, the data remains in the saas-db-data volume (check with docker volume ls).
Tip: Back up volumes regularly, for example with a throwaway container that archives the volume (sketched below) or with external backup tools. See Docker’s volume docs for advanced options.
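A common approach is a short-lived container that tars the volume contents to the host (a sketch; saas-db-data matches the named volume above, and the archive path is arbitrary):

# Archive the named volume to ./db-data-backup.tar.gz in the current directory
docker run --rm \
  -v saas-db-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/db-data-backup.tar.gz -C /data .

For a consistent database backup, prefer running pg_dump against the live db container rather than copying its data directory.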
6. Implementing Health Checks for Reliability
Health checks ensure services are ready before dependent services start, preventing issues like a web app trying to connect to an unready database. Docker Compose’s healthcheck directive lets you define custom checks.
Example: Database Health Check
Here’s a health check for PostgreSQL.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U saasuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
What this does: The db service runs pg_isready every 10 seconds to check whether PostgreSQL is ready, and the web service waits until db reports healthy (service_healthy). Run docker-compose up -d to start. Check health status with docker inspect <container_id>.
Tip: Tailor health checks to your app’s needs (e.g., HTTP endpoints for web services). Docker’s healthcheck docs explain more.
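For the web service, an HTTP-based check could look like this (a sketch; it assumes the image ships curl and the app exposes a /health endpoint on port 8080, both assumptions about your build):

services:
  web:
    image: my-saas-app:web-latest
    healthcheck:
      # Fails unless the endpoint answers with a success status
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 15s
      timeout: 3s
      retries: 3
      start_period: 20s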
7. Isolating Environments with Profiles
On-prem SaaS often requires separate environments (e.g., dev, staging, prod) on the same hardware. Docker Compose profiles let you define environment-specific services in one file, activated with the --profile flag.
Example: Dev and Prod Profiles
Here’s a Compose file with profiles for development and production.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network
    profiles:
      - prod

  web-dev:
    image: my-saas-app:web-dev
    environment:
      - DB_HOST=db
      - DEBUG=true
    depends_on:
      - db
    ports:
      - "8080:8080"
    networks:
      - app-network
    profiles:
      - dev

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network
    profiles:
      - dev
      - prod

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
What this does: The web service runs in the prod profile, while web-dev runs in dev with debugging enabled; the db service runs in both. Run docker-compose --profile dev up -d for development or docker-compose --profile prod up -d for production.
Tip: Use profiles to toggle monitoring or logging services. Check Docker’s profiles docs for details.
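For example, an optional monitoring stack can live in the same file behind its own profile and only start when requested (a sketch; the Prometheus service and its config path are assumptions, not part of the setup above):

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"
    networks:
      - app-network
    profiles:
      - monitoring

Running docker-compose --profile prod --profile monitoring up -d starts production plus monitoring, while leaving the profile off keeps the service dormant.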
8. Streamlining Updates with Zero-Downtime Deployments
On-prem SaaS needs zero-downtime updates to avoid disrupting users. The Compose file’s deploy section can describe rolling updates, controlling how containers are replaced; note that update_config and rollback are honored when the file is deployed as a Docker Swarm stack, while plain docker-compose up simply recreates changed containers.
Example: Rolling Updates
Here’s a setup for zero-downtime web service updates.
version: '3.8'

services:
  web:
    image: my-saas-app:web-latest
    environment:
      - DB_HOST=db
    depends_on:
      - db
    networks:
      - app-network
    deploy:
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      restart_policy:
        condition: on-failure
        max_attempts: 3

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: saasuser
      POSTGRES_PASSWORD: securepassword
      POSTGRES_DB: saasdb
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
What this does: The update_config replaces one container at a time with a 10-second delay, and rolls back if an update fails. These settings take effect when the file is deployed as a Swarm stack (docker stack deploy); with plain docker-compose up -d, a new image tag just causes the containers to be recreated. Monitor with docker-compose ps, or docker service ps for a Swarm stack.
Tip: Combine with a load balancer (like Nginx) for seamless updates. See Docker’s deployment docs for more.
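If you want update_config to actually drive the rollout, deploying the same file as a single-node Swarm stack is the most direct route (a sketch; saas is an arbitrary stack name):

# One-time: turn the host into a single-node Swarm
docker swarm init

# Deploy, or re-deploy after changing the image tag, with rolling updates
docker stack deploy -c docker-compose.yml saas

# Watch tasks being replaced one at a time
docker service ps saas_web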
What’s Next: Building Robust On-Prem SaaS
These patterns—modular configs, scaling, secrets, resource limits, volumes, health checks, profiles, and rolling updates—form a solid foundation for on-prem SaaS with Docker Compose. Start by structuring your Compose file clearly and incrementally add features like health checks or secrets as needed. Test each pattern in a staging environment to catch issues early.
For larger deployments, consider pairing Compose with tools like Docker Swarm or Kubernetes for advanced orchestration. Keep monitoring resource usage and logs to optimize performance. With these patterns, you’re well-equipped to deliver reliable, scalable, and secure SaaS on your own hardware.