Docker Networking: From Bridge to Overlay
Master Docker's networking models—bridge, host, overlay, and macvlan—for connecting containers across hosts and distributed applications.
Containers are isolated by design. But isolation only gets you so far; real applications need to talk to each other, to the outside world, and across multiple hosts.
Docker provides several networking models, each suited to different scenarios. Understanding them lets you design container communication that is both secure and practical.
Docker Networking Models Overview
Docker ships with four built-in network drivers, plus the ability to use third-party drivers for specialized scenarios.
| Driver | When to Use |
|---|---|
| bridge | Default for standalone containers on a single host |
| host | When you need maximum network performance and container isolation is less critical |
| overlay | Containers running across multiple Docker hosts (Docker Swarm) |
| macvlan | When containers need their own MAC address and must appear as physical hosts |
| none | Completely disable networking for a container |
The default is bridge, but the default bridge network has limitations you will quickly encounter.
When to Use Each Network Driver
Choose the right driver based on your architecture requirements:
- bridge — Use for development environments and single-host production workloads where containers need to communicate on the same host. A custom bridge is preferred over the default because it provides DNS-based service discovery by container name.
- host — Use when network throughput is critical and you can manage port conflicts manually. Suitable for dedicated infrastructure services like log aggregators or monitoring agents that need direct host network access.
- overlay — Use when deploying services across multiple Docker hosts. The primary choice for Docker Swarm clusters. Avoid it if you only need single-host networking, since overlay adds VXLAN encapsulation complexity.
- macvlan — Use when containers must appear as physical devices on your network with their own MAC addresses. Required for legacy applications that depend on DHCP or perform network admission control based on MAC addresses.
- none — Use for security-sensitive workloads that should have zero network connectivity. The container will only have the loopback interface.
Bridge Networks for Single-Host Containers
The bridge network is the default. When you run a container without specifying a network, Docker connects it to the default bridge.
How the Default Bridge Works
Docker creates a Linux bridge called docker0 on the host. Containers get virtual ethernet interfaces connected to this bridge. The bridge assigns containers IP addresses from a private subnet (typically 172.17.0.0/16).
Host
+-------------------+
| docker0 bridge | 172.17.0.1
| |
| +-------------+ |
| | container A | | 172.17.0.2
| +-------------+ |
| +-------------+ |
| | container B | | 172.17.0.3
| +-------------+ |
+-------------------+
Containers on the default bridge can reach each other by IP address, but not by container name. DNS resolution by name only works on custom bridge networks.
Custom Bridge Networks
Create your own bridge network to get automatic DNS resolution:
docker network create --driver bridge my_network
Or in Docker Compose:
services:
  web:
    build: .
    networks:
      - frontend
  api:
    build: ./api
    networks:
      - frontend

networks:
  frontend:
    driver: bridge
Now web can reach api at api:3000 (or whatever port api exposes). Docker embeds a DNS resolver that handles this automatically.
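A quick way to watch this resolution happen. The network and container names below are illustrative, and nginx:alpine stands in for your real services:

```shell
# Create a user-defined bridge and start two containers on it
docker network create --driver bridge my_network
docker run -d --name api --network my_network nginx:alpine
docker run -d --name web --network my_network nginx:alpine

# From web, the name "api" resolves via Docker's embedded DNS
docker exec web ping -c 1 api
```

On the default bridge, the same `ping api` would fail with "bad address", because name registration only happens on user-defined networks.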
DNS Resolution Flow on Custom Bridge
sequenceDiagram
participant Web as Web Container
participant Resolver as Docker DNS Resolver
participant API as API Container
Web->>Resolver: Lookup api
Note over Resolver: Check embedded DNS cache
alt Container name found
Resolver-->>Web: Return API container IP
Web->>API: Send request to API:3000
else Container name not in cache
Resolver->>Resolver: Forward to host-configured external DNS
Resolver-->>Web: Return resolved IP
Web->>API: Send request
end
Port Mapping
Containers on bridge networks are isolated from the host by default. To expose a container port to the host:
# Map host port 8080 to container port 80
docker run -d -p 8080:80 nginx:latest
# Map multiple ports
docker run -d -p 8080:80 -p 8443:443 nginx:latest
# Bind to specific host interface only
docker run -d -p 127.0.0.1:8080:80 nginx:latest
# Random host port
docker run -d -P nginx:latest # Docker assigns random ports
In Docker Compose:
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"           # Host:Container
      - "127.0.0.1:8081:80" # Localhost only
      - "3000-3010:3000"    # Port range
Host Networking Performance
The host network driver removes network namespace isolation entirely. The container shares the host network stack:
docker run --network host nginx:latest
On the default bridge, Docker adds a layer of indirection through the virtual ethernet and bridge. With host networking, there is no indirection. The container binds directly to the host network interfaces.
This matters for applications where network latency is critical. The performance difference is typically 5-10% lower CPU usage for high-throughput networking.
The tradeoff: port conflicts become more likely, and the container has full access to the host network. If two containers both try to bind to port 80, one fails.
services:
  nginx:
    image: nginx:latest
    network_mode: host
    # No port mapping needed - container uses host ports directly
Overlay Networks for Multi-Host
When you have multiple Docker hosts, containers on different hosts need a way to communicate. The overlay network driver creates a distributed network across hosts, making all containers appear on the same logical network regardless of which host they run on.
Overlay networks require a key-value store to coordinate state across hosts. When using Docker Swarm, that store is built into Swarm mode. For standalone Docker, you need an external key-value store like etcd, Consul, or ZooKeeper — a legacy configuration; current Docker releases support overlay networking through Swarm mode.
Docker Swarm Mode Overlay
Swarm mode includes built-in overlay networking:
# Initialize swarm
docker swarm init
# Create an overlay network (works across all swarm nodes)
docker network create --driver overlay my_overlay
Containers can now communicate across hosts using the overlay network. Swarm handles the VXLAN tunneling and distributed routing automatically.
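For example, a replicated service attached to the overlay gets tasks that are mutually reachable on every node. The service name and image below are placeholders:

```shell
docker service create \
  --name api \
  --replicas 3 \
  --network my_overlay \
  myapi:latest

# Tasks may land on different nodes but share the overlay address space
docker service ps api
```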
Standalone Overlay with Docker Compose
For standalone Docker without Swarm, you set up the key-value store yourself. Docker Compose can then attach services to a pre-existing overlay network by declaring it external:
services:
  web:
    image: myapp:latest
    networks:
      - frontend
  api:
    image: myapi:latest
    networks:
      - frontend

networks:
  frontend:
    driver: overlay
    external: true # Use pre-existing overlay network
How Overlay Networking Works
Overlay uses VXLAN (Virtual Extensible LAN) to encapsulate container traffic. Each container gets an IP on the overlay network. When container A sends a packet to container B on a different host:
1. Container A sends the packet to the overlay interface
2. The host’s Docker overlay driver encapsulates the packet in VXLAN
3. The encapsulated packet travels over the physical network to host B
4. Host B’s overlay driver decapsulates it and delivers it to container B
This encapsulation adds small overhead but enables seamless multi-host networking without network team involvement for new IP assignments.
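You can observe the encapsulation directly: overlay data traffic travels as UDP on port 4789, so a capture on the physical interface (eth0 here is a placeholder) shows the VXLAN envelope rather than the raw container packets:

```shell
# Watch VXLAN-encapsulated overlay traffic on the underlay interface
tcpdump -ni eth0 udp port 4789
```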
VXLAN Encapsulation Flow
sequenceDiagram
participant A as Container A
participant HA as Host A
participant Network
participant HB as Host B
participant B as Container B
A->>HA: Send packet to Container B IP
HA->>HA: Encapsulate in VXLAN (UDP port 4789)
HA->>Network: Forward encapsulated packet
Network->>HB: Route to Host B
HB->>HB: Decapsulate VXLAN
HB->>B: Deliver original packet
Macvlan for Legacy Integration
Some applications expect to appear as physical machines on the network, with their own MAC address. Maybe they require DHCP. Maybe they do deep packet inspection and expect traffic from a real NIC.
Macvlan creates a virtual interface with a specified MAC address and attaches containers directly to the physical network:
# Create macvlan network attached to eth0
docker network create \
  --driver macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my_macvlan
services:
  legacy_app:
    image: legacy:latest
    networks:
      - my_macvlan

networks:
  my_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
The container gets an IP on your physical subnet, assigned by Docker from the configured range. To the network, the container looks like any other physical server.
The catch: the container shares the network interface with the host. You need to ensure MAC address filtering on switches does not block the container MACs, and you cannot run macvlan and host networking simultaneously on the same interface.
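Attaching a container with a fixed address from the macvlan subnet might look like this (the names and address are illustrative):

```shell
docker run -d \
  --name legacy_app \
  --network my_macvlan \
  --ip 192.168.1.50 \
  legacy:latest

# Other machines on the LAN now see 192.168.1.50 as a distinct host
```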
DNS-Based Service Discovery
Docker’s embedded DNS provides name resolution for containers on user-defined networks. This is how containers find each other without hardcoded IP addresses.
Automatic Name Resolution
On a custom bridge or overlay network, Docker registers container names as DNS entries:
services:
  db:
    image: postgres:15-alpine
    networks:
      - backend
    hostname: postgres
  web:
    image: myapp:latest
    networks:
      - backend
    depends_on:
      - db
The web container can reach the database at db:5432, because the service name is registered with Docker's DNS. Note that the hostname field only sets the container's internal hostname; it does not create a DNS entry for other containers, so use network aliases if you need additional names.
Custom DNS Entries with Aliases
You can add DNS aliases for a service:
services:
  api:
    image: myapi:latest
    networks:
      backend:
        aliases:
          - api.internal
          - api-service
Now other containers can reach the API at api, api.internal, or api-service.
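You can check that every alias resolves to the same address from a peer container on the network (web is a placeholder name):

```shell
docker exec web getent hosts api
docker exec web getent hosts api.internal
docker exec web getent hosts api-service
```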
DNS Search Domains
services:
  web:
    image: myapp:latest
    dns_search: "example.com"
The container appends example.com to unqualified hostnames. If the container looks up database, it becomes database.example.com.
Network Troubleshooting
When containers cannot communicate, here is how to debug.
Check Container Network Configuration
# Inspect container network settings
docker inspect -f '{{json .NetworkSettings.Networks}}' mycontainer | jq
# Get container IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
# Check which networks a container is on
docker inspect mycontainer | jq '.[0].NetworkSettings.Networks'
Test Network Connectivity
# Get a shell in the container
docker exec -it mycontainer sh
# From inside the container, test connectivity
ping api
curl http://api:3000
nslookup api # Check DNS resolution
netcat -zv api 3000 # Check port connectivity
Check Network Driver Status
# List all networks
docker network ls
# Inspect a network
docker network inspect bridge
# Remove orphaned networks (deletes networks not used by any container)
docker network prune
Common Issues
Container cannot reach another container by name:
- Are they on the same network? Run docker network inspect <network> to check.
- Is DNS resolution working? Try reaching by IP instead of name.
Container cannot reach external networks:
- Check iptables rules on the host: iptables -L -n
- Is IP forwarding enabled? cat /proc/sys/net/ipv4/ip_forward
Port mapping not working:
- Is something else using the host port? ss -tlnp | grep 8080
- Is the container actually listening? docker logs mycontainer
Network Performance Tuning
For high-throughput applications, network performance matters.
Increase Connection Tracking Table Size
On busy Docker hosts, the nf_conntrack table can fill up:
# Check current usage
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
# Increase max if needed (set in /etc/sysctl.conf for persistence)
echo 1048576 > /proc/sys/net/netfilter/nf_conntrack_max
Adjust MTU for Overlay Networks
Overlay networks add 50 bytes of header overhead per packet. If your physical network uses jumbo frames (MTU 9000), you can increase the overlay MTU:
docker network create \
--driver overlay \
--opt com.docker.network.driver.mtu=1450 \
my_overlay
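The arithmetic behind these values is simply the physical MTU minus the 50-byte VXLAN header:

```shell
# Overlay MTU = physical MTU - 50 bytes of VXLAN overhead
standard_mtu=$((1500 - 50))
jumbo_mtu=$((9000 - 50))
echo "standard overlay MTU: $standard_mtu"  # 1450
echo "jumbo overlay MTU: $jumbo_mtu"        # 8950
```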
Use DNS Cache for High Query Volumes
If your containers make many DNS queries, configure a local DNS cache:
services:
  app:
    image: myapp:latest
    dns:
      - 172.17.0.1 # docker0 gateway, where the host-networked dnsmasq listens
      - 8.8.8.8    # fallback
  dnsmasq:
    image: andyshinn/dnsmasq:latest
    network_mode: host
    command: --cache-size=1000 --log-queries
Production Failure Scenarios
Docker networking fails in ways that are not always obvious. Here are the most common issues.
Overlay Network Partition
When overlay network nodes lose coordination due to key-value store issues, containers on different hosts can no longer communicate even though the physical network is fine.
Symptoms: containers on host A cannot reach containers on host B, but both can reach external addresses.
Diagnosis:
# Check swarm cluster state
docker node ls
# Check overlay network status
docker network inspect my_overlay
# Check key-value store connectivity
docker info | grep -i kv
Mitigation: Ensure your key-value store (etcd, Consul) has proper HA configuration and network connectivity.
DNS Resolution Failure After Container Restart
Containers sometimes fail to resolve names after restart, particularly when they get new IP addresses and the embedded DNS cache has stale entries.
Symptoms: ping api returns “bad address”, but ping 172.17.0.4 works.
Diagnosis:
# Check container DNS config
docker exec mycontainer cat /etc/resolv.conf
# Test with explicit DNS server
docker exec mycontainer nslookup api 8.8.8.8
Mitigation: Restart affected containers to pick up fresh DNS configuration, or use Docker’s embedded DNS at 127.0.0.11.
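Querying the embedded DNS server directly helps separate a stale cache from broken upstream DNS:

```shell
# Ask Docker's embedded resolver explicitly from inside the container
docker exec mycontainer nslookup api 127.0.0.11
```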
IP Address Exhaustion
The default bridge uses 172.17.0.0/16, which provides roughly 65,000 addresses. With many containers, frequent recreation, and many user-defined networks (each claims its own subnet from Docker's default address pool), you can exhaust the available address space.
Symptoms: docker: Error response from daemon: could not find an available IP address.
Mitigation: Use a custom bridge with a larger subnet, or switch to host networking for high-density workloads.
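A custom bridge with its own larger range might be created like this. The 10.10.0.0/16 range here is illustrative; pick one that does not collide with your infrastructure:

```shell
docker network create \
  --driver bridge \
  --subnet 10.10.0.0/16 \
  --gateway 10.10.0.1 \
  big_bridge
```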
Anti-Patterns
These patterns cause problems in production. Avoid them.
Using Default Bridge
The default bridge (docker0) does not provide DNS-based service discovery by container name. Containers must reference each other by IP address, which changes on restart.
Always create custom bridge networks:
docker network create --driver bridge my_network
Exposing Unnecessary Ports
Exposing ports to the host increases the attack surface. Only expose what is genuinely needed.
# Anti-pattern: exposing everything
docker run -p 80:80 -p 443:443 -p 8080:8080 myapp
# Right: expose only what you need
docker run -p 80:80 myapp
Port Conflicts with Host Networking
Using --network host means containers share the host network namespace. If two containers both try to bind to port 80, one fails.
Mitigation: Use bridge networking with port mapping, or ensure only one container per host uses a given port.
Security Checklist
Use this checklist when configuring container networking in production:
- Use custom bridge networks instead of the default bridge
- Drop all capabilities (--cap-drop ALL) for untrusted containers
- Do not run containers as --privileged
- Limit exposed host ports to the minimum required
- Use network policies to restrict container-to-container communication
- Avoid --network host unless you have a specific reason
- Monitor for unexpected cross-host network traffic
- Use overlay networks with encryption (--opt encrypted) for multi-host production workloads
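The encrypted overlay from that last item takes a single flag; the network name is a placeholder:

```shell
# IPsec-encrypt VXLAN traffic between swarm nodes on this network
docker network create \
  --driver overlay \
  --opt encrypted \
  secure_overlay
```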
Trade-off Summary
| Driver | Use Case | Cross-Host | Performance | Port Conflicts |
|---|---|---|---|---|
| Bridge (custom) | Single-host, dev/prod | No (unless overlay) | Good | Low (port mapping) |
| Host | Max throughput, dedicated infra | No | Best | High (no isolation) |
| Overlay | Multi-host (Swarm) | Yes (VXLAN) | Moderate | Low |
| Macvlan | Legacy DHCP/MAC-based apps | Same L2 segment only | Best | Medium |
| None | Secure isolation | No | N/A | None |
Conclusion
Docker networking provides several models for different scenarios. Bridge networks handle single-host container communication with DNS-based service discovery. Host networking trades isolation for performance. Overlay networks connect containers across multiple hosts. Macvlan integration provides legacy compatibility.
The right network model depends on your architecture. For most applications, custom bridge networks with proper service discovery are sufficient. As you scale to multi-host deployments, overlay networks and Swarm mode provide the necessary integration.
Docker handles most of the complexity automatically, but understanding what is happening underneath helps when things go wrong. For more on connecting containers across hosts, explore Kubernetes networking, which builds on these same principles with additional complexity for pod-to-pod communication.
To understand how containers persist data across restarts, continue to Docker Volumes.