Docker Compose: Orchestrating Multi-Container Applications
Define and run multi-container Docker applications using Docker Compose. From local development environments to complex microservice topologies.
Docker Compose turns a multi-container application into a single deployable unit. Instead of running a dozen docker commands to start your stack, you write a YAML file describing your services, networks, and volumes, then run docker-compose up.
This tutorial covers Docker Compose from basics to advanced usage: service definitions, networking, environment variables, scaling, and production considerations.
When to Use Docker Compose / When to Graduate to Kubernetes
Docker Compose excels in specific scenarios but has natural limits. Understanding when to use each tool helps you architect appropriately for your scale.
Use Docker Compose when:
- Local development — spinning up the full stack on your machine with hot reload and debugging
- CI/CD testing — running integration tests in isolated containers without external dependencies
- Single-host deployment — deploying a complete stack to a single VM or dedicated server
- Prototyping — rapidly iterating on architecture without infrastructure complexity
- Small to medium workloads — managing under 20 containers with straightforward networking
Signs you are outgrowing Compose:
- Horizontal scaling limits — docker-compose up --scale does not provide automatic load balancing or self-healing
- Multi-host networking — Compose networking is host-local; communication between hosts requires additional tooling
- Rolling updates — manual image updates with downtime or complex blue-green scripts
- Service discovery at scale — DNS-based discovery works for handfuls of services, not hundreds
- Resource isolation — no built-in CPU/memory limits across the entire stack, only per-container
Graduate to Kubernetes when:
- Multiple nodes required — workloads exceed what a single host can handle or you need high availability
- Enterprise compliance — role-based access control, network policies, and audit logs are requirements
- Complex CI/CD pipelines — automated rollouts, rollbacks, and canary deployments are daily operations
- Service mesh needs — traffic management, circuit breaking, and observability beyond basic health checks
- Multi-environment consistency — same manifests across dev, staging, production with different configurations
Compose File Structure
A docker-compose.yml file describes your entire application stack:
```yaml
version: "3.8"

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
    depends_on:
      - db
      - redis

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge
```
The version field specifies the Compose file format version. 3.8 covers most use cases; note that newer releases of Compose implement the Compose Specification and treat version as optional, ignoring its value.
Service Dependency Topology
The dependency graph below shows a slightly extended version of this stack, with a dedicated API service sitting between the web tier and the data stores:
```mermaid
graph LR
    User([Browser]) --> Web[Web Service<br/>:3000]
    Web --> API[API Service<br/>:4000]
    API --> DB[(PostgreSQL<br/>:5432)]
    API --> Redis[(Redis<br/>:6379)]
```
Each arrow represents a network connection. Docker Compose automatically creates a shared bridge network where services can reach each other by service name.
Service Definitions and Dependencies
Each entry under services is a container. Docker Compose builds or pulls the image, then starts the container with the specified configuration.
Building from Dockerfile
```yaml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NODE_ENV: production
    image: myapp:latest
```
The build instruction tells Compose to build the image from a Dockerfile. The image instruction names the resulting image. If you omit image, Compose names it projectname_web.
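Build args declared under args are consumed in the Dockerfile with ARG. A minimal sketch of the corresponding Dockerfile (the base image and commands are illustrative assumptions, since the original does not show the Dockerfile):

```dockerfile
FROM node:20-alpine

# Compose passes NODE_ENV via build.args; declare it after FROM to use it in this stage
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
```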
Using Pre-Built Images
```yaml
services:
  db:
    image: postgres:15-alpine
```
Docker pulls the image if not present locally.
depends_on
The depends_on directive ensures services start in the correct order:
```yaml
services:
  web:
    build: .
    depends_on:
      - db
      - redis
  api:
    build: ./api
    depends_on:
      - db
  db:
    image: postgres:15-alpine
```
Compose starts db first, then web and api in parallel. The directive does not wait for the database to be ready, only for the container to start. For databases and similar services, implement application-level retry logic or use healthcheck with condition.
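If you rely on plain depends_on ordering, application-level retry logic bridges the gap between "container started" and "database ready". A minimal sketch in Python (the function name, timings, and the commented usage are illustrative assumptions):

```python
import socket
import time


def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 1.0) -> bool:
    """Poll a TCP port until it accepts connections or retries run out."""
    for _ in range(retries):
        try:
            # A successful connect means the service is at least listening
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False


# At application startup, block until the database container accepts connections.
# The hostname "db" resolves via Docker's embedded DNS inside the Compose network:
# if not wait_for_port("db", 5432):
#     raise SystemExit("database never became reachable")
```

A TCP check only proves the port is open; for PostgreSQL specifically, a healthcheck running pg_isready (shown in the next section) is a stronger readiness signal.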
Health Checks and Conditions
```yaml
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d app"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
```
Now web waits for db to be healthy, not just started.
Networking with Compose
Compose creates a default network for your stack. All services join this network and can reach each other by service name.
Automatic DNS Resolution
```yaml
services:
  web:
    build: .
    environment:
      - DATABASE_URL=postgres://db:5432/app
  db:
    image: postgres:15-alpine
```
The web service can reach db at postgres://db:5432/app. Docker embeds a DNS resolver that resolves service names to container IPs.
Custom Networks
For more control, define explicit networks:
```yaml
services:
  frontend:
    build: ./frontend
    networks:
      - frontend_net
  backend:
    build: ./backend
    networks:
      - frontend_net
      - backend_net
  db:
    image: postgres:15-alpine
    networks:
      - backend_net

networks:
  frontend_net:
    driver: bridge
  backend_net:
    driver: bridge
```
The frontend can reach the backend, and the backend can reach the database. The frontend cannot reach the database directly. This segmentation adds security.
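To tighten the segmentation further, a network can be marked internal so containers attached to it get no route to the outside world at all. The internal flag is a standard Compose option; applying it to backend_net here is a suggestion beyond the original example:

```yaml
networks:
  backend_net:
    driver: bridge
    internal: true  # containers on this network get no external connectivity
```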
External Networks
Use an existing network instead of creating one:
```yaml
networks:
  default:
    external: true
    name: my_pre-existing_network
```
Environment Variables and Secrets
Compose provides several ways to inject configuration into services.
Basic Environment Variables
```yaml
services:
  web:
    environment:
      - NODE_ENV=production
      - API_KEY=secret123
      - DEBUG=false
```
Environment from .env File
Create a .env file in the same directory as docker-compose.yml:
NODE_ENV=production
API_KEY=secret123
DATABASE_URL=postgres://user:pass@db:5432/app
Reference variables in docker-compose.yml:
```yaml
services:
  web:
    environment:
      - NODE_ENV=${NODE_ENV}
      - API_KEY=${API_KEY}
      - DATABASE_URL=${DATABASE_URL}
```
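Compose's variable interpolation also supports defaults and required-variable errors, so the stack behaves predictably when a variable is unset. A small sketch using the standard ${VAR:-default} and ${VAR:?message} syntax:

```yaml
services:
  web:
    environment:
      # Fall back to "development" when NODE_ENV is not set in .env or the shell
      - NODE_ENV=${NODE_ENV:-development}
      # Fail fast with a clear message when a required variable is missing
      - API_KEY=${API_KEY:?API_KEY must be set}
```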
Secrets in Docker Swarm
For production secrets, use Docker secrets (requires Swarm mode):
```yaml
services:
  db:
    image: postgres:15-alpine
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```
The secret file content gets mounted at /run/secrets/db_password inside the container. The file never appears in environment variables.
Security Note
Never commit .env files with real secrets to version control. Add them to .gitignore. For CI/CD, inject secrets from your pipeline’s secret management system.
Development vs Production Workflows
Compose files often differ between development and production.
Development Compose File
```yaml
# docker-compose.yml
services:
  web:
    build: .
    volumes:
      - ./src:/app/src:ro # Hot reload
    environment:
      - NODE_ENV=development
    ports:
      - "3000:3000"
      - "9229:9229" # Debug port
```
Mounting source code as a volume enables hot reload. Changes on your host appear immediately inside the container.
Production Compose File
```yaml
# docker-compose.prod.yml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    environment:
      - NODE_ENV=production
    ports:
      - "3000:3000"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 10s
```
No volumes for source code (code is baked into the image). Explicit health checks. Restart policy.
Using Multiple Compose Files
# Start with both files (base + override)
docker-compose up -d
# Use production file instead of development
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
The prod file overrides settings from the base file: single-valued options such as the build context are replaced, while multi-valued options such as ports, volumes, and environment entries are merged.
Scaling Services
Compose can run multiple replicas of a service:
docker-compose up -d --scale web=3
This runs 3 instances of the web service. However, port mapping becomes tricky with multiple replicas since you cannot map the same host port to multiple containers.
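One way around the host-port conflict (a standard Compose option, not shown in the original) is to publish a host port range, letting each replica claim one port from it:

```yaml
services:
  web:
    build: .
    ports:
      # With --scale web=3, each replica binds one host port from 3000-3002
      - "3000-3002:3000"
```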
Scaling with a Load Balancer
For actual load balancing, use a tool like nginx or Traefik in front of your scaled services:
```yaml
services:
  web:
    build: .
    scale: 3
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
```
Each web container gets a unique name (web-1, web-2, web-3 under Compose v2; older versions use underscores). Inside the Compose network, the service name web resolves to every replica, so nginx can proxy to web and rely on Docker's DNS to spread requests across the replicas.
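A minimal nginx.conf for this setup might look like the following. This is a sketch under two assumptions: the app listens on port 3000, and 127.0.0.11 is Docker's embedded DNS resolver (its standard address inside containers). Using a variable in proxy_pass forces nginx to re-resolve the name at runtime, so newly scaled replicas are picked up:

```nginx
events {}

http {
    server {
        listen 80;

        location / {
            # Docker's embedded DNS; re-resolve periodically to see new replicas
            resolver 127.0.0.11 valid=10s;
            set $upstream http://web:3000;
            proxy_pass $upstream;
        }
    }
}
```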
Docker Compose vs Kubernetes
Compose is excellent for local development and single-host deployment. For production at scale, Kubernetes handles automatic load balancing, rolling updates, and self-healing. The concepts translate, but the tooling differs significantly.
Common Commands
Starting Your Stack
# Start all services
docker-compose up -d
# Start and rebuild if images are outdated
docker-compose up -d --build
# Start specific services
docker-compose up -d web db
# Start with override file
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
Viewing Logs
# Follow logs from all services
docker-compose logs -f
# Follow logs from specific service
docker-compose logs -f web
# Tail logs with timestamps
docker-compose logs -f --tail=100 --timestamps web
Checking Status
# List running services
docker-compose ps
# List images used by services
docker-compose images
# Inspect service configuration
docker-compose config
Stopping and Cleaning Up
# Stop services (containers remain)
docker-compose stop
# Stop and remove containers
docker-compose down
# Stop and remove containers and volumes
docker-compose down -v
# Stop and remove everything including images
docker-compose down --rmi local
Executing Commands in Services
# Run a command in a service
docker-compose exec web node --version
# Run with interactive shell
docker-compose exec web sh
# Run database migrations
docker-compose exec api npm run migrate
Building Multi-Architecture Images
For services that need to run on different architectures:
```yaml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      platforms:
        - linux/amd64
        - linux/arm64
```
Use Docker buildx for the actual build. The bake subcommand reads build definitions, including platforms, straight from the Compose file:
# Build every service defined in the Compose file
docker buildx bake -f docker-compose.yml
# Or build and push a single multi-architecture image in one step
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
Extending Compose Files
Compose supports extending services from other files:
```yaml
# docker-compose.yml (base)
services:
  web:
    build: .
    environment:
      - NODE_ENV=${NODE_ENV}
```

```yaml
# docker-compose.dev.yml (extends base)
services:
  web:
    volumes:
      - ./src:/app/src:ro
    ports:
      - "3000:3000"
```
Run with:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
The dev file adds volumes and ports to the base service without duplicating the build configuration.
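For configuration shared across many services within a single file, Compose also supports extension fields (top-level keys prefixed with x-) combined with YAML anchors and merge keys. A sketch (the field name x-defaults is an illustrative choice):

```yaml
# Reusable defaults stored in an extension field with a YAML anchor
x-defaults: &service-defaults
  restart: unless-stopped
  logging:
    driver: "json-file"
    options:
      max-size: "10m"

services:
  web:
    <<: *service-defaults  # merge the shared defaults into this service
    build: .
  worker:
    <<: *service-defaults
    build: ./worker
```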
Troubleshooting
Service Fails to Start
# Check logs first
docker-compose logs web
# Verify service configuration
docker-compose config
# Recreate containers
docker-compose up -d --force-recreate
Port Conflicts
# Check what is using the port
ss -tlnp | grep 3000
# Find containers using the port
docker-compose ps -a | grep 3000
Volume Permission Issues
Containers often run as non-root. If your application cannot read a mounted volume:
# Check current ownership
ls -la ./data
# Fix ownership from a temporary container (bind-mount the host directory)
docker run --rm -v "$(pwd)/data:/data" alpine chown -R 1001:1001 /data
DNS Resolution Failures
# Check if the container can resolve service names (getent is present even when ping is not)
docker-compose exec web getent hosts api
# Check DNS configuration
docker-compose exec web cat /etc/resolv.conf
# Restart the network
docker-compose down
docker-compose up -d
Production Failure Scenarios
Compose stacks fail in ways that are not always obvious. Here are the most common issues.
Volume Permission Issues After Image Update
When you update an image, the UID/GID the container runs as may change. If the volume has data owned by the old UID, the new container cannot read or write.
Symptoms: “Permission denied” errors immediately after docker-compose pull.
Diagnosis:
# Check container user
docker-compose exec web id
# Check volume ownership
ls -la ./data
Mitigation: Always test image updates in staging. Use a named volume instead of a bind mount for application data so ownership is managed by Docker.
Circular Dependency Deadlock
If service A depends on B and B depends on A, Compose may hang at startup.
Symptoms: docker-compose up hangs, services never start, logs show services waiting on each other.
Diagnosis:
# Check your depends_on configuration
docker-compose config | grep -A5 depends_on
Mitigation: Review depends_on configuration. Use condition: service_healthy with health checks instead of simple dependency ordering.
Secrets File Missing on First Start
If a service requires a secret file that does not exist when Compose starts, the service fails.
Symptoms: ERROR: file not found at startup, even though the file was supposed to be created by another service.
Diagnosis:
# Check if secret file exists
ls -la ./secrets/db-password.txt
Mitigation: Use docker-compose up --exit-code-from &lt;service&gt; to surface the failure in CI, or add a startup health check that validates required files exist before dependent services start.
Capacity Estimation
How many containers can a single host support? This depends on CPU, memory, and network capacity, not just Docker.
Rough guidelines for a typical host (4 cores, 8GB RAM):
| Container Type | Containers per Host |
|---|---|
| Lightweight (nginx, redis) | 50-100 |
| Medium (Node.js, Python) | 10-30 |
| Heavy (JVM, databases) | 3-8 |
These are soft limits. The real constraint is your application resource usage. Monitor actual consumption with docker stats.
Memory estimation:
# Check container memory usage
docker stats --no-stream
# Each Node.js container ~100-300MB
# Each Python worker ~200-500MB
# Each PostgreSQL ~500MB-2GB
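Those per-container footprints turn into a rough headcount with simple arithmetic. A sketch (the 1 GB reserve for the OS and Docker daemon is an assumption; adjust for your host):

```python
def max_containers(host_mem_mb: int, per_container_mb: int, reserve_mb: int = 1024) -> int:
    """Rough capacity estimate: usable memory divided by per-container footprint."""
    usable = host_mem_mb - reserve_mb
    return max(0, usable // per_container_mb)


# An 8 GB host running ~300 MB Node.js containers:
print(max_containers(8192, 300))  # 23
```

Treat the result as an upper bound on memory alone; CPU contention and network throughput usually bite first.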
Observability
Logging Configuration
Configure Compose to send logs to a centralized system:
```yaml
services:
  web:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```
For production, use the syslog or fluentd logging driver to send logs to a central log aggregator.
Centralized Logging Example
```yaml
services:
  web:
    image: myapp:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: web
```
Health Checks for Services
Add health checks to ensure services are genuinely ready:
```yaml
services:
  api:
    image: myapi:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 10s
```
Use condition: service_healthy in depends_on to ensure services only start after their dependencies are genuinely ready.
Security Checklist
Use this checklist when deploying Docker Compose to production:
- Store secrets in environment variables or Docker secrets, not in YAML files committed to version control
- Use specific image tags, not latest — pins versions for reproducibility
- Scan images for vulnerabilities before deployment (trivy image)
- Run services as non-root user (user: "1001:1001")
- Use read-only root filesystems where possible (read_only: true)
- Limit container capabilities (cap_drop: ALL)
- Separate services by trust boundary — do not put untrusted services in the same Compose stack
- Rotate secrets regularly — implement secret rotation procedures
- Use TLS for any internal service-to-service communication
- Set resource limits (memory, CPU) to prevent one service from starving others
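Several of these items map directly to Compose options. A hardened service sketch (the image tag, UID, and limit values are illustrative assumptions):

```yaml
services:
  web:
    image: myapp:1.4.2        # pinned tag, not latest
    user: "1001:1001"         # run as a non-root user
    read_only: true           # read-only root filesystem
    cap_drop:
      - ALL                   # drop all Linux capabilities
    tmpfs:
      - /tmp                  # writable scratch space despite read_only
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M
```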
Conclusion
Docker Compose simplifies development and deployment of multi-container applications. You define your stack once in YAML, then run a single command to start everything.
The patterns covered here apply broadly: define services, configure networking and volumes, manage environment variables and secrets, and separate development from production configurations.
For local development, Compose is often all you need. For production deployment, understanding Compose concepts provides a foundation for Kubernetes, which builds on these same ideas with additional complexity.
To learn more about building optimized images for Compose, see Multi-Stage Builds. For persistent storage in Compose applications, explore Docker Volumes.
Quick Recap Checklist
Use this checklist when working with Docker Compose:
- Define all services, networks, and volumes in a docker-compose.yml file
- Use depends_on with condition: service_healthy for service startup ordering
- Store secrets outside the YAML file — use .env files or Docker secrets
- Set resource limits (memory, CPU) for each service
- Add health checks to critical services
- Use separate Compose files for dev/staging/prod (-f docker-compose.yml -f docker-compose.prod.yml)
- Never commit secrets or environment files with credentials to version control
- Test your Compose configuration with docker-compose config before running
- Monitor container resource usage with docker stats
- Log to stdout/stderr and let Docker handle log aggregation