Docker Fundamentals
Learn Docker containerization fundamentals: images, containers, volumes, networking, and best practices for building and deploying applications.
Docker has transformed how we build, ship, and run applications. Before containers, developers wrestled with “works on my machine” problems. Operations teams struggled with inconsistent environments across development, staging, and production. Docker solved this by containerizing applications with their dependencies, making deployments predictable across any infrastructure.
This guide covers Docker fundamentals: images, containers, volumes, networking, and the essential commands you need. If you are new to containers, this is where you start.
What is Docker
Docker packages applications into containers. A container bundles your code, runtime, system tools, libraries, and settings into a single, executable unit. Unlike virtual machines, containers share the host kernel and run as isolated processes. They start in seconds rather than minutes and use far less memory.
The key insight: your application and all its dependencies ship together. The container runs the same whether on your laptop, a server in your data center, or a cloud VM. No more “it worked in staging but broke in production.”
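You can see the kernel sharing for yourself. A minimal check, assuming a running Docker daemon (the alpine:3.19 tag is just an example):

```shell
# The kernel version inside the container matches the host,
# because containers share the host kernel instead of booting their own
uname -r
docker run --rm alpine:3.19 uname -r

# The userland, by contrast, is the container's own: Alpine, not the host OS
docker run --rm alpine:3.19 cat /etc/alpine-release
```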
Docker provides:
- Consistent environments from development through production
- Isolation between applications and their dependencies
- Portability across any infrastructure that runs Docker
- Resource efficiency compared to full virtual machines
Images and Containers
A Docker image is a read-only template with instructions for creating a container. Images are layered—each instruction creates a new layer in the image. Layers are cached and shared across images, making builds fast and images small.
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container. Each container is isolated—changes inside a container do not affect other containers or the host system.
# Pull an image from Docker Hub
docker pull ubuntu:22.04
# List local images
docker images
# Run a container from an image
docker run -it ubuntu:22.04 /bin/bash
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a running container
docker stop my_container
# Remove a stopped container
docker rm my_container
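Isolation is easy to demonstrate with the commands above. A small sketch (the container name writer is arbitrary):

```shell
# Write a file inside one container
docker run --name writer ubuntu:22.04 bash -c 'echo hello > /data.txt'

# A fresh container from the same image does not see it:
# the file exists only in writer's own writable layer
docker run --rm ubuntu:22.04 ls /data.txt

# Clean up
docker rm writer
```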
Interactive Container Session
The -it flag combines -i (keep STDIN open) and -t (allocate a pseudo-terminal), giving you an interactive shell. When you exit the shell, the container stops. To keep a container running in the background, use -d:
# Run container in background
docker run -d --name my_ubuntu ubuntu:22.04 sleep infinity
# Attach to running container
docker exec -it my_ubuntu /bin/bash
# View container logs
docker logs my_container
# Follow log output
docker logs -f my_container
Creating Docker Images
You create images with a Dockerfile—a text file with instructions for building the image.
Basic Dockerfile
# Start from a base image
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install production dependencies (--only=production is deprecated)
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Define environment variable
ENV NODE_ENV=production
# Run the application
CMD ["node", "server.js"]
Key instructions:
- FROM sets the base image
- WORKDIR creates and sets the working directory
- COPY copies files from host to image
- RUN executes commands during build
- EXPOSE documents the port the container listens on
- ENV sets environment variables
- CMD specifies what to run when the container starts
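One nuance with CMD: the JSON-array ("exec") form used above runs the process directly as PID 1, so it receives signals like SIGTERM from docker stop. The shell form wraps the command in /bin/sh -c, which can swallow those signals. A sketch of the difference:

```dockerfile
# Exec form: node is PID 1 and receives SIGTERM on "docker stop"
CMD ["node", "server.js"]

# Shell form: /bin/sh -c is PID 1; node may never see SIGTERM,
# so "docker stop" waits out its timeout and then sends SIGKILL
# CMD node server.js
```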
Building Images
# Build with tag
docker build -t my-app:1.0.0 .
# Build with no cache (force rebuild)
docker build --no-cache -t my-app:1.0.0 .
# List images
docker images
# Remove an image
docker rmi my-app:1.0.0
# Tag an existing image
docker tag my-app:1.0.0 myregistry.io/my-app:1.0.0
Managing Data with Volumes
Containers are ephemeral: remove a container and everything written to its filesystem goes with it. Docker volumes persist data outside the container's writable layer.
Volume Types
Named volumes persist data and are managed by Docker:
# Create a volume
docker volume create my_data
# Run container with volume
docker run -d --name db \
-v my_data:/var/lib/postgresql/data \
postgres:15
# Inspect volume
docker volume inspect my_data
# List volumes
docker volume ls
# Remove unused volumes
docker volume prune
Bind mounts map host directories into containers:
# Mount host directory
docker run -d --name dev_app \
-v $(pwd):/app \
-v /app/node_modules \
node:20
# Read-only bind mount
docker run -d --name prod_app \
-v $(pwd):/app:ro \
node:20
tmpfs mounts store data in memory—useful for sensitive data you do not want persisted:
# Store secrets in memory
docker run -d --name secrets \
--tmpfs /run/secrets \
redis:7
Container File Systems
Each container has its own filesystem. Changes inside a container do not affect the image. The container filesystem layers:
- Read-only image layers
- Writable container layer (thin R/W layer)
- Container-specific configuration
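docker diff makes the writable layer visible. A quick sketch (the container name scratchpad is arbitrary):

```shell
# Changes land in the container's thin writable layer, not the image
docker run --name scratchpad ubuntu:22.04 touch /opt/marker

# List additions (A) and changes (C) relative to the image layers
docker diff scratchpad

# The writable layer can even be snapshotted into a new image
docker commit scratchpad scratchpad-img
docker rm scratchpad
```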
Container Networking
Docker provides networking for containers to communicate. The default bridge network lets containers talk to each other and the host.
Network Drivers
Bridge is the default driver. Containers on the same user-defined bridge network can reach each other by container name; the default bridge provides only IP-based connectivity.
# Create a bridge network
docker network create my_network
# Run containers on the network
docker run -d --name app --network my_network my-app
docker run -d --name db --network my_network postgres:15
# Containers can now reach each other by name
# app can connect to db:5432
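Name resolution comes from Docker's embedded DNS server, which only serves user-defined networks. Assuming the app and db containers above are running, you can verify it from inside app:

```shell
# Resolve the db container's name to its IP from inside app
docker exec app getent hosts db

# On the default bridge network this lookup would fail:
# embedded DNS is not available there
```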
Host removes network isolation—container uses host network directly:
docker run --network host my-service
Overlay connects containers across multiple Docker hosts (for Swarm):
docker network create --driver overlay my_overlay
None disables networking:
docker run --network none isolated-app
Port Mapping
Expose container ports to the host for external access:
# Map container port 3000 to host port 8080
docker run -d -p 8080:3000 my-app
# Map to random available port
docker run -d -P my-app
# List port mappings
docker port my-app
Docker Compose
Docker Compose manages multi-container applications. You define services, networks, and volumes in a YAML file, then spin up everything with one command. Recent Docker releases ship Compose v2 as a CLI plugin invoked as docker compose; the docker-compose commands below work the same way with either spelling.
docker-compose.yml Example
version: "3.8"

services:
  app:
    build: .
    ports:
      - "8080:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://db:5432/myapp
    depends_on:
      - db
    volumes:
      - ./data:/app/data
    networks:
      - frontend
      - backend

  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend

  redis:
    image: redis:7-alpine
    networks:
      - backend

volumes:
  db_data:

networks:
  frontend:
  backend:
Compose Commands
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f app
# Scale a service
docker-compose up -d --scale app=3
# Stop and remove containers
docker-compose down
# Stop and remove volumes
docker-compose down -v
# Rebuild after code changes
docker-compose up -d --build
Essential Commands Reference
Container Lifecycle
docker run [options] image [command] # Create and start container
docker start container # Start existing container
docker stop container # Stop running container
docker restart container # Stop and start
docker pause container # Suspend processes
docker unpause container # Resume processes
docker rm container # Remove stopped container
docker kill container # Force-stop container
Inspecting Containers
docker ps # List running containers
docker ps -a # List all containers
docker logs container # View logs
docker inspect container # Low-level info
docker diff container # Filesystem changes
docker stats container # Resource usage
docker top container # Running processes
Image Management
docker images # List local images
docker pull image:tag # Download image
docker rmi image # Remove image
docker build -t name:tag . # Build from Dockerfile
docker tag source:tag target:tag # Create alias
docker history image # Image layers
docker save -o file.tar image # Export to file
docker load -i file.tar # Import from file
Cleanup Commands
docker system df # Disk usage
docker system prune # Remove unused data
docker container prune # Remove stopped containers
docker image prune # Remove dangling images
docker volume prune # Remove unused volumes
docker network prune # Remove unused networks
Common Patterns
Running a Web Server
# Run nginx serving static files
docker run -d \
--name nginx \
-p 80:80 \
-v $(pwd)/html:/usr/share/nginx/html:ro \
nginx:alpine
Database Container
# Run PostgreSQL with persistent storage
docker run -d \
--name postgres \
-e POSTGRES_DB=myapp \
-e POSTGRES_USER=user \
-e POSTGRES_PASSWORD=secret \
-v postgres_data:/var/lib/postgresql/data \
postgres:15
Development Environment
# Run Node app with live reload
docker run -d \
--name node_dev \
-p 3000:3000 \
-v $(pwd):/app \
-v /app/node_modules \
node:20 npm start
Running Tests
# Run tests in isolated container
docker run --rm \
-v $(pwd):/app \
-w /app \
node:20 npm test
Best Practices
Image Optimization
Use specific image tags. node:latest changes, breaking reproducibility. Use node:20-alpine or node:20.11.0-alpine3.19.
Minimize layers. Combine RUN commands where it makes sense. Order instructions from least to most frequently changing.
# Bad: Multiple layers for related operations
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# Good: Single layer
RUN apt-get update && \
apt-get install -y curl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
Use multi-stage builds. Keep production images small by copying only what you need:
# Build stage: install all dependencies and compile
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: only runtime dependencies and build output
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
Security
Do not run as root. Create a user and switch to it:
RUN addgroup -g 1001 -S appuser && \
adduser -S appuser -u 1001
USER appuser
Scan images for vulnerabilities. The standalone docker scan command has been retired; Docker Scout (or a third-party scanner such as Trivy) replaces it:
docker scout cves my-app:1.0.0
Use official base images. Verify images come from trusted sources.
Resource Limits
Always set memory and CPU limits:
docker run -d \
--name my-app \
--memory="512m" \
--cpus="0.5" \
my-app:1.0.0
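You can confirm the limits were applied with docker inspect: memory is reported in bytes (512m is 536870912) and CPU as nano-CPUs (0.5 is 500000000):

```shell
# Read back the resource limits Docker recorded for the container
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' my-app
```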
Health Checks
Add health checks to your Dockerfiles:
# curl must be installed in the image for this check to succeed
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:3000/health || exit 1
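Once a health check is defined, docker ps appends (healthy) or (unhealthy) to the container's status, and the current state can be read directly (my_container is a placeholder name):

```shell
# Read the health state recorded by the most recent checks
docker inspect --format '{{.State.Health.Status}}' my_container
```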
When to Use / When Not to Use Docker
Good Use Cases
- Application packaging with all dependencies
- Microservices deployment where isolation matters
- CI/CD pipelines for consistent build and test environments
- Development environments matching production
- Scaling applications across multiple hosts
Limitations
- Persistent stateful applications require careful volume management
- GUI applications work but need additional configuration
- Windows-specific software needs Windows containers (limited support)
- Very lightweight tasks might not justify container overhead
Next Steps
After Docker fundamentals, explore:
- Kubernetes for container orchestration at scale
- Docker networking for advanced multi-host scenarios
- Docker Swarm for simple cluster orchestration
- Container security for production hardening
Check out our Kubernetes guide to learn how Docker containers fit into production microservices deployments.