You have built a Docker image. You can run a container. But the moment your application needs a database, a cache layer, a reverse proxy, or a background worker, you are staring at four separate docker run commands—each with a dozen flags for ports, volumes, networks, and environment variables. One typo and the whole stack breaks. That is the exact problem Docker Compose solves.
Docker Compose lets you describe your entire multi-container application in a single YAML file. Instead of memorizing long docker run incantations, you declare what you want—services, networks, volumes, environment variables—and bring everything up with one command: docker compose up. This guide will take you from zero to a working multi-service stack in about ten minutes.
Writing YAML for the first time? YAML is whitespace-sensitive and a single indentation error can break your entire file. Use NexTool's free YAML Formatter to validate and auto-format your docker-compose.yml before running it.
What Is Docker Compose and Why You Need It
Docker Compose is a tool for defining and running multi-container Docker applications. At its core, it reads a YAML configuration file and translates it into the Docker API calls needed to create containers, networks, and volumes.
Here is why you should care:
- Reproducibility — your entire stack is defined in a single file that you can commit to version control. Anyone on the team can clone the repo and run docker compose up to get an identical environment.
- Simplicity — instead of writing shell scripts to orchestrate multiple docker run commands, you write declarative YAML. The tool handles container creation order, networking, and volume mounting.
- Isolation — each project gets its own Compose file and its own isolated network. Your blog's MySQL instance does not collide with your API's PostgreSQL database.
- Speed — spin up a full development environment in seconds. Tear it down just as fast. No leftover containers, no orphan volumes.
Docker Compose ships with Docker Desktop on macOS and Windows. On Linux, Docker Engine 20.10+ includes the docker compose plugin by default. The older standalone docker-compose binary (with the hyphen) still works but is considered legacy. Throughout this guide, we use the modern docker compose syntax (without the hyphen).
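If you are not sure which variant is installed on your machine, a quick check from the terminal settles it:

# Confirm the Compose V2 plugin is available (modern syntax, no hyphen)
docker compose version

# Check whether the legacy standalone binary is installed (hyphenated)
docker-compose --version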
The docker-compose.yml Structure
Every Compose file is built from a handful of top-level keys; the three you will use constantly are services, networks, and volumes. Here is the skeleton:
services:
  # Define your containers here
  web:
    image: nginx:alpine
  db:
    image: postgres:16

networks:
  # Custom networks (optional)
  backend:
    driver: bridge

volumes:
  # Named volumes for persistent data (optional)
  db-data:
Let us break down each section.
services
The most important section. Each key under services defines a container. The key becomes the service name, which doubles as its hostname on the Compose network. If you define a service called db, other containers in the same Compose file can reach it at db:5432.
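As a minimal sketch (the application image and credentials here are placeholders), an application container can reach the database purely by its service name:

services:
  app:
    image: myapp:latest                        # placeholder image
    environment:
      # "db" is the service name below; Compose's internal DNS resolves it
      DATABASE_URL: postgres://myapp:secret@db:5432/myapp_db
  db:
    image: postgres:16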
networks
By default, Compose creates a single bridge network for your project and attaches all services to it. You only need the networks section when you want multiple isolated networks, custom drivers, or specific subnet configurations. For most development setups, you can omit this entirely.
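When you do want separation, a sketch along these lines (service and network names are illustrative) attaches each service only to the networks it should see:

services:
  proxy:
    image: nginx:alpine
    networks:
      - edge
      - internal
  api:
    image: myapi:latest        # placeholder image
    networks:
      - internal

networks:
  edge:
  internal: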
volumes
Named volumes persist data beyond the lifecycle of a container. When you run docker compose down, containers are destroyed but named volumes survive. This is critical for databases. Without a named volume, your PostgreSQL data vanishes every time you restart the stack.
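A minimal sketch of the pattern: the top-level volumes key declares the volume, and the service mounts it by name.

services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # survives docker compose down

volumes:
  db-data: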
Note on the version key: Older tutorials show a version: "3.8" line at the top of the file. As of Docker Compose V2, the version key is optional and ignored. You can safely omit it. The Compose specification is now versioned independently of the file format.
Service Configuration: Every Key Directive Explained
The services section is where you spend 90% of your time. Here are the directives you will use most often.
image
Specifies the Docker image to pull from a registry. Use the format image: name:tag. Always pin a specific tag in production instead of using latest.
services:
  redis:
    image: redis:7.2-alpine
build
Tells Compose to build an image from a Dockerfile instead of pulling one. You can specify a build context directory and optionally a custom Dockerfile name.
services:
  api:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    ports:
      - "3000:3000"
ports
Maps container ports to host ports using the format HOST:CONTAINER. This is the equivalent of docker run -p.
ports:
  - "8080:80"     # host 8080 -> container 80
  - "5432:5432"   # same port on both sides
environment
Sets environment variables inside the container. You can use the map syntax or the list syntax.
environment:
  POSTGRES_USER: myapp
  POSTGRES_PASSWORD: secret123
  POSTGRES_DB: myapp_db
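The same variables in the list syntax are written as KEY=value strings:

environment:
  - POSTGRES_USER=myapp
  - POSTGRES_PASSWORD=secret123
  - POSTGRES_DB=myapp_db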
volumes
Mounts host directories or named volumes into the container. Named volumes use the volume-name:/path syntax. Bind mounts use ./host-path:/container-path.
volumes:
  - db-data:/var/lib/postgresql/data   # named volume
  - ./src:/app/src                     # bind mount (dev)
depends_on
Controls startup order. Compose starts the dependency first, then the dependent service. Note that depends_on only waits for the container to start, not for the service inside it to be ready. For health-based ordering, use the condition sub-key.
depends_on:
  db:
    condition: service_healthy
restart
Defines the container restart policy. Options are no, always, on-failure, and unless-stopped. For production, unless-stopped is the most practical choice.
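In a Compose file that looks like this (the worker image is a placeholder):

services:
  worker:
    image: myworker:latest   # placeholder image
    restart: unless-stopped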
healthcheck
Defines how Docker checks whether the service is healthy. This is especially useful with depends_on: condition: service_healthy.
healthcheck:
  test: ["CMD", "pg_isready", "-U", "myapp"]
  interval: 5s
  timeout: 3s
  retries: 5
5 Practical docker-compose.yml Examples
Let us put theory into practice with five real-world stacks you can copy and adapt today.
1. Web Application + PostgreSQL Database
The most common pattern: a web frontend served by Nginx paired with a PostgreSQL database. The database uses a named volume so data persists across restarts.
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: webapp
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: webapp_production
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U webapp"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  pgdata:
Notice the ${DB_PASSWORD} variable substitution. Compose reads this from a .env file in the same directory or from your shell environment. We cover this pattern in detail below.
2. Node.js API + Redis Cache
A Node.js Express API that uses Redis for session storage and caching. The API is built from a local Dockerfile while Redis uses the official image.
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
      REDIS_URL: redis://cache:6379
    volumes:
      - ./src:/app/src
      - /app/node_modules
    depends_on:
      cache:
        condition: service_healthy
    restart: unless-stopped

  cache:
    image: redis:7.2-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped

volumes:
  redis-data:
The bind mount ./src:/app/src means code changes on your host are immediately reflected inside the container—no rebuild needed. The anonymous volume /app/node_modules prevents the host's node_modules from overwriting the container's installed dependencies.
3. WordPress + MySQL (Classic CMS Stack)
WordPress is one of the most popular Docker Compose use cases. This stack gives you a fully functional WordPress site with persistent storage in under a minute.
services:
  wordpress:
    image: wordpress:6.4-apache
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_USER: wp_user
      WORDPRESS_DB_PASSWORD: ${WP_DB_PASSWORD}
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp-content:/var/www/html/wp-content
    depends_on:
      mysql:
        condition: service_healthy
    restart: unless-stopped

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp_user
      MYSQL_PASSWORD: ${WP_DB_PASSWORD}
    volumes:
      - mysql-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  wp-content:
  mysql-data:
After running docker compose up -d, open http://localhost:8080 and you will see the WordPress installation wizard. The entire CMS, database, and persistent storage—defined in 35 lines of YAML.
4. Full Development Environment (Frontend + Backend + Database + Adminer)
A realistic development stack with a React frontend, a Python Flask API, a PostgreSQL database, and Adminer for database management through a web UI.
services:
  frontend:
    build:
      context: ./frontend
      target: development
    ports:
      - "5173:5173"
    volumes:
      - ./frontend/src:/app/src
    environment:
      VITE_API_URL: http://localhost:8000
    depends_on:
      - api

  api:
    build:
      context: ./backend
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://dev:devpass@db:5432/devdb
      FLASK_ENV: development
      FLASK_DEBUG: "1"
    volumes:
      - ./backend:/app
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"
    volumes:
      - dev-pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev"]
      interval: 5s
      timeout: 3s
      retries: 5

  adminer:
    image: adminer:latest
    ports:
      - "8888:8080"
    depends_on:
      - db

volumes:
  dev-pgdata:
This is the power of Compose for development. Every team member runs the same stack. No more "works on my machine" problems. The Adminer service gives you a lightweight phpMyAdmin-style interface at http://localhost:8888 to inspect and query the database visually.
5. Multi-Service API (Gateway + Auth + Users + Database)
A microservices-style architecture where an Nginx gateway routes traffic to two backend services, each connecting to a shared database. Custom networks isolate the frontend-facing services from the backend layer.
services:
  gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/gateway.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - auth-service
      - user-service
    networks:
      - frontend
      - backend
    restart: unless-stopped

  auth-service:
    build: ./services/auth
    environment:
      DB_HOST: db
      DB_PORT: "5432"
      JWT_SECRET: ${JWT_SECRET}
    networks:
      - backend
    restart: unless-stopped

  user-service:
    build: ./services/users
    environment:
      DB_HOST: db
      DB_PORT: "5432"
    networks:
      - backend
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: platform
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: platform_db
    volumes:
      - platform-data:/var/lib/postgresql/data
    networks:
      - backend
    restart: unless-stopped

networks:
  frontend:
  backend:
    internal: true

volumes:
  platform-data:
The backend network is marked as internal: true, which means it has no external connectivity. The auth and user services cannot be reached directly from the host—all traffic must flow through the gateway. This is a practical security pattern for production-like environments.
Essential Docker Compose Commands
Once your docker-compose.yml is written, these are the commands you will run daily.
# Start all services (foreground, shows logs)
docker compose up
# Start all services in the background
docker compose up -d
# Stop and remove all containers, networks
docker compose down
# Stop and remove everything INCLUDING volumes (destroys data!)
docker compose down -v
# View logs (all services)
docker compose logs
# Follow logs for a specific service
docker compose logs -f api
# List running containers in this project
docker compose ps
# Execute a command inside a running container
docker compose exec db psql -U webapp -d webapp_production
# Run a one-off command in a new container
docker compose run --rm api npm test
# Rebuild images (after Dockerfile changes)
docker compose build
# Rebuild and restart
docker compose up -d --build
# Pull latest images for all services
docker compose pull
# Scale a service to multiple instances
docker compose up -d --scale worker=3
The most common workflow during development is: docker compose up -d --build when you change your Dockerfile, and just docker compose up -d when you only change application code (assuming you have bind mounts for hot-reloading).
Environment Variables and the .env File
Hardcoding passwords and API keys directly in your Compose file is a security risk and makes the file less portable. Docker Compose has first-class support for environment variable substitution.
Create a .env file in the same directory as your docker-compose.yml:
# .env
DB_PASSWORD=supersecret123
MYSQL_ROOT_PASSWORD=rootpass456
WP_DB_PASSWORD=wppass789
JWT_SECRET=my-signing-key-here
NODE_ENV=development
Then reference these variables in your Compose file with ${VARIABLE_NAME} syntax:
environment:
  POSTGRES_PASSWORD: ${DB_PASSWORD}
  JWT_SECRET: ${JWT_SECRET}
Compose automatically loads the .env file from the project directory. You can also specify a custom env file with docker compose --env-file .env.production up.
Critical: Add .env to your .gitignore immediately. Never commit secrets to version control. Instead, commit a .env.example file with placeholder values so team members know which variables are required.
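A typical .env.example contains only placeholders and is safe to commit:

# .env.example — copy to .env and fill in real values
DB_PASSWORD=changeme
MYSQL_ROOT_PASSWORD=changeme
WP_DB_PASSWORD=changeme
JWT_SECRET=changeme
NODE_ENV=development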
You can also pass environment variables from a file directly into a container using the env_file directive. The difference is that env_file loads variables into the container, while ${} substitution resolves variables in the Compose file itself.
services:
  api:
    image: myapi:latest
    env_file:
      - ./config/api.env
      - ./config/shared.env
Best Practices: Development vs Production
Your development and production Compose files should differ significantly. Here are the key distinctions.
Development Best Practices
- Use bind mounts for source code so changes appear instantly without rebuilding the image.
- Expose debug ports (database, cache, admin UIs) to localhost for easy inspection.
- Enable hot-reloading with environment variables like FLASK_DEBUG=1 or NODE_ENV=development.
- Use build directives to build images locally from source.
- Skip restart policies or use restart: "no" (quoted, so YAML does not parse it as a boolean) so containers stop cleanly when you Ctrl+C.
Production Best Practices
- Pin exact image tags like postgres:16.2-alpine instead of postgres:latest. Floating tags cause unpredictable deployments.
- Set resource limits to prevent a single service from consuming all system memory:
deploy:
  resources:
    limits:
      cpus: "1.0"
      memory: 512M
    reservations:
      cpus: "0.25"
      memory: 128M
- Use restart: unless-stopped so containers recover from crashes automatically but respect manual stops.
- Remove bind mounts and bake all application code into the image during the build step.
- Do not expose internal ports to the host. Only the reverse proxy or gateway should have a ports mapping.
- Use named volumes for all persistent data. Bind mounts tied to host paths make deployments brittle.
- Enable logging drivers to ship logs to a centralized system:
logging:
  driver: json-file
  options:
    max-size: "10m"
    max-file: "3"
- Use Docker secrets or external secret managers instead of environment variables for sensitive data in Swarm-mode deployments.
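As a sketch of the file-based secrets pattern (the file path is illustrative; the official postgres image reads the _FILE variant of its password variable):

services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this file out of version control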
Using Multiple Compose Files
A clean pattern is to maintain a base file and override files for each environment. Compose merges them automatically:
# Development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
The base file defines the services and their relationships. The override file changes ports, volumes, environment variables, and restart policies for the target environment.
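As a sketch, a docker-compose.dev.yml override might layer bind mounts, debug settings, and an exposed database port on top of the base file (paths and ports are illustrative):

# docker-compose.dev.yml — merged on top of docker-compose.yml
services:
  api:
    build:
      context: ./backend
    volumes:
      - ./backend:/app          # hot-reload source code
    environment:
      FLASK_DEBUG: "1"
  db:
    ports:
      - "5432:5432"             # expose the database for local inspection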
Troubleshooting Common Issues
When things go wrong with Docker Compose, the fix is usually one of these:
- Port already in use — another process or container is using the host port. Check with lsof -i :8080 and either stop the conflicting process or change the port mapping in your Compose file.
- Service cannot connect to database — the database container started but the service inside it is not ready yet. Use depends_on with condition: service_healthy and a proper healthcheck instead of bare depends_on.
- Volume permission errors — the container user does not match the host file owner. For bind mounts in development, you may need to set user: "${UID}:${GID}" in the service definition.
- Changes not taking effect — if you modified the Dockerfile, you must rebuild with docker compose up -d --build. Simply running up reuses the cached image.
- YAML syntax errors — a misplaced space or a tab character will break the entire file. YAML does not allow tabs for indentation. Use a YAML validator to catch these instantly.
- Orphan containers from renamed services — if you rename a service, the old container lingers. Run docker compose down --remove-orphans to clean up.
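Before digging into any of these, it is worth letting Compose validate and render the file itself:

# Parse the file, resolve variables, and print the effective configuration
docker compose config

# Validate only; no output unless there is an error
docker compose config --quiet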
Wrapping Up
Docker Compose turns multi-container orchestration from a chore into a declaration. Instead of managing containers one by one, you describe your desired state in YAML and let Compose figure out the rest. Whether you are setting up a local development environment, running integration tests, or deploying a small production stack, the workflow is the same: define it in docker-compose.yml, run docker compose up, done.
The key takeaways from this guide:
- Services define your containers. Each service gets a name that doubles as its network hostname.
- Volumes persist data. Always use named volumes for databases.
- Networks isolate traffic. The default network handles most cases; use custom networks for microservice architectures.
- Environment variables belong in a .env file, never hardcoded in your Compose file and never committed to git.
- Healthchecks + depends_on conditions solve startup ordering properly.
- Keep separate Compose overrides for development and production.
If you are writing YAML for the first time or debugging a tricky indentation issue, NexTool's YAML Formatter validates and auto-formats your file in seconds. For converting between configuration formats, the YAML/JSON Converter handles the translation instantly. Both tools run entirely in your browser with no sign-up required.