Kubernetes Network Policies: Securing Pod-to-Pod Communication
Implement microsegmentation in Kubernetes using Network Policies to control traffic flow between pods and enforce zero-trust networking.
By default, all pods in a Kubernetes cluster can communicate with all other pods. This flat network model works during development but creates security risks in production. A compromised pod can reach any other pod in the cluster, including sensitive services like databases and secrets management.
Kubernetes Network Policies let you restrict pod-to-pod communication based on labels, namespaces, and ports. This microsegmentation approach implements zero-trust networking inside the cluster.
This post covers how Network Policies work, default deny patterns, and practical policy configurations.
For Kubernetes basics, see the Kubernetes fundamentals post. For services and Ingress, see the Services and Networking post.
When to Use / When Not to Use
When Network Policies make sense
Multi-tenant clusters need them by default. If you do not control what runs in every namespace, a compromised workload could reach your database.
Compliance mandates often require them. PCI-DSS, SOC2, HIPAA all have requirements around network segmentation. Network policies are how you implement that inside Kubernetes.
Zero-trust means no flat network. A compromised pod should not automatically have access to everything. Restrict what can reach your database, your cache, your secrets manager.
When to skip them
Single-tenant clusters where you control every workload are lower risk. If every person who can deploy to your cluster is trusted, the flat network is less of a concern.
External segmentation can be enough. Cloud VPC security groups that isolate your Kubernetes nodes from each other and from other services provide some protection. Network policies then add defense in depth.
Early development is not the time. The operational overhead of debugging why your service cannot reach its database when you forgot to allow port 5432 slows down iteration.
Traffic Filtering Flow
```mermaid
flowchart TD
    P1[Pod A<br/>app=web-frontend] -->|Egress| NP1{Network Policy<br/>on Pod A}
    NP1 -->|Allow to<br/>DNS| DNS[CoreDNS<br/>:53]
    NP1 -->|Allow to<br/>:8080| P2[Pod B<br/>app=api-backend]
    NP1 -->|Block all<br/>else| X[Dropped]
    P2 -->|Ingress| NP2{Network Policy<br/>on Pod B}
    NP2 -->|Allow from<br/>web-frontend| P1
    NP2 -->|Block all<br/>else| X2[Dropped]
```
Network policies are pod-scoped. Each pod has its own ingress and egress rules. When no policy selects a pod, all of its traffic is allowed (assuming the CNI enforces policies at all). Once at least one policy selects the pod, only explicitly allowed traffic is permitted.
How Network Policies Work
A Network Policy is a namespaced resource that selects pods and defines ingress and egress rules. The policy controller (part of the CNI plugin) enforces the rules by configuring network filters on the node.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
```
This policy allows the api-backend pod to receive traffic from web-frontend pods on port 8080, and to send traffic to postgres pods on port 5432.
Policy evaluation order
Network Policies are additive. If multiple policies select the same pod, the union of all allowed traffic is permitted. This means you must carefully design policies to avoid unintended exposure.
Some CNI providers like Calico support policy priorities to resolve conflicts:
```yaml
spec:
  order: 100
```

Lower order values have higher priority.
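As a sketch of how ordering plays out (Calico's projectcalico.org/v3 API; the policy names and order values here are illustrative), a specific allow at order 100 is evaluated before a catch-all deny at order 1000 on the same pods:

```yaml
# Illustrative only: the lower-order policy is evaluated first.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-health-checks
  namespace: production
spec:
  order: 100          # evaluated first
  selector: app == 'api-backend'
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports:
          - 8080
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: catch-all-deny
  namespace: production
spec:
  order: 1000         # evaluated last
  selector: app == 'api-backend'
  ingress:
    - action: Deny
```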
Default Deny All Ingress and Egress
Start with a default deny policy for each namespace, then explicitly allow required traffic. This follows the principle of least privilege.
Default deny ingress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

`podSelector: {}` selects all pods in the namespace. With no ingress rules, all incoming traffic is blocked.
Default deny egress
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
```
This blocks all outgoing traffic until you add policies allowing specific destinations.
Combined default deny
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
Apply this before deploying any application, then add allow policies as you deploy services.
Allowing Specific Traffic with Pod Selectors
After setting default deny, allow specific traffic patterns:
Web frontend to API backend
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
```
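If default deny egress is in place, the ingress allow above is only half the story: the web-frontend pods also need an egress policy permitting the same connection. A sketch (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-egress-to-api   # illustrative name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: api-backend
      ports:
        - protocol: TCP
          port: 8080
```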
API backend to database
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-backend
      ports:
        - protocol: TCP
          port: 5432
```
Combining namespace and pod selectors
For cross-namespace rules, combine namespaceSelector with podSelector to allow traffic from specific pods in specific namespaces:
```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            name: frontend
        podSelector:
          matchLabels:
            app: web-frontend
      - namespaceSelector:
          matchLabels:
            name: monitoring
        podSelector:
          matchLabels:
            app: prometheus
```
This allows traffic from frontend namespace pods labeled app: web-frontend and from monitoring namespace pods labeled app: prometheus. Note the indentation: because namespaceSelector and podSelector sit in the same from entry (no leading dash before podSelector), the two conditions are ANDed. A dash before podSelector would turn them into separate, ORed peers, a common and dangerous mistake.
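These selectors only match if the namespaces actually carry the labels. A minimal sketch of labeling the frontend namespace, assuming you manage namespaces as manifests:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    name: frontend   # must match the namespaceSelector in the policy
```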
Namespace-Level Policies
Apply policies at the namespace level to protect entire namespaces or enforce compliance requirements:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: production-isolation
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: production
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
```

This allows traffic only from pods in the production namespace or from the ingress-nginx namespace. The name labels must actually be present on those namespaces; recent Kubernetes versions automatically label every namespace with kubernetes.io/metadata.name, which you can use instead of maintaining custom labels.
Isolating system namespaces
Protect Kubernetes system namespaces like kube-system from application traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-kube-system
  namespace: kube-system
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
```

Application pods cannot reach system services unless explicitly allowed. Be careful: as written, this also blocks DNS queries from application namespaces to CoreDNS, so pair it with an explicit allow for port 53 before applying it to a real cluster.
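If components outside kube-system legitimately need in, for example a Prometheus deployment scraping kube-system targets, add a narrow extra from entry rather than widening the whole policy. A sketch (the monitoring namespace and prometheus label are assumptions about your setup):

```yaml
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
      # Assumed labels: adjust to match your monitoring deployment
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: monitoring
        podSelector:
          matchLabels:
            app: prometheus
```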
DNS Egress Rules
Pods need DNS resolution to work properly. DNS runs in the kube-system namespace on port 53 (TCP and UDP). Allow DNS traffic in your egress policies:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow other necessary egress (example)
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 443
```

The DNS rule uses `to` to specify destination namespaces. Without this, pods cannot resolve service names or external domains.
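The rule above permits port 53 to anything in kube-system. Many clusters can tighten this to the CoreDNS pods themselves, which typically carry the k8s-app: kube-dns label (verify the label in your cluster first):

```yaml
egress:
  - to:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: kube-system
        podSelector:
          matchLabels:
            k8s-app: kube-dns   # common CoreDNS label; confirm in your cluster
    ports:
      - protocol: UDP
        port: 53
      - protocol: TCP
        port: 53
```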
CNI Providers and Policy Enforcement
Network Policy support varies by CNI provider. Not all providers implement all policy features:
| Provider | Ingress | Egress | Policy Priorities | DNAT |
|---|---|---|---|---|
| Calico | Yes | Yes | Yes | Yes |
| Cilium | Yes | Yes | Yes | Yes |
| Weave | Yes | Yes | No | No |
| Flannel | No | No | No | No |
Calico NetworkPolicy
Calico extends standard NetworkPolicy with additional features:
```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: api-isolation
  namespace: production
spec:
  selector: app == 'api-backend'
  types:
    - Ingress
    - Egress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'web-frontend'
      destination:
        ports:
          - 8080
    - action: Deny
  egress:
    - action: Allow
      protocol: TCP
      destination:
        selector: app == 'postgres'
        ports:
          - 5432
    # DNS needs both UDP and TCP
    - action: Allow
      protocol: UDP
      destination:
        ports:
          - 53
    - action: Allow
      protocol: TCP
      destination:
        ports:
          - 53
```

Calico's explicit `action: Deny` rules make policy intent clearer. Note that Calico requires a protocol whenever a rule specifies ports.
Cilium NetworkPolicy
Cilium uses eBPF for enforcement and supports L7 policies:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-policy
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: web-frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```
Cilium also supports HTTP, Kafka, and DNS filtering at L7.
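As a sketch of that DNS-aware filtering, a CiliumNetworkPolicy can pin egress to a specific external domain with toFQDNs; the policy name and domain below are placeholders. FQDN rules require Cilium to observe DNS traffic, so the DNS proxy rule comes first:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-api   # illustrative name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-backend
  egress:
    # DNS queries must be allowed and inspected for toFQDNs to resolve
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Only this external domain is reachable (placeholder)
    - toFQDNs:
        - matchName: "api.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```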
Testing Network Policies
Verify your policies work correctly with connectivity tests:
Using kubectl to test connectivity
```bash
kubectl run -it --rm test-pod \
  --image=busybox \
  --restart=Never \
  -- wget -q -T 5 -O- http://api-backend:8080/health
```

If the connection succeeds, your policy allows the traffic. If it times out (the -T 5 flag caps the wait at five seconds), the policy blocks it.
Using a policy visualizer
Tools like Hornet visualize NetworkPolicy rules and help identify unintended exposure. Calico Enterprise includes policy visualization and impact analysis.
Conclusion
Network Policies provide microsegmentation in Kubernetes. Start with default deny policies to block all traffic, then explicitly allow only the traffic your applications need.
Use pod selectors to define which pods a policy applies to. Use namespace selectors to allow traffic from specific namespaces. Remember to allow DNS resolution in your egress rules.
Not all CNI providers support all policy features. Calico and Cilium provide the richest policy implementations. Choose a CNI that supports the policy features your security requirements demand.
Network Policies are one part of a defense-in-depth strategy. Combine them with RBAC, Secrets encryption, and pod security policies for comprehensive cluster security.
Production Failure Scenarios
Policy Blocking All Traffic
A default-deny policy accidentally applied to the wrong namespace, or an overly broad rule that blocks your application's actual dependencies, produces a sudden outage with no obvious cause in the application logs.
Test in staging first. Apply to production during low-traffic windows. Have a rollback plan.
DNS Resolution Fails After Default Deny Egress
This is the most common mistake. Default deny goes in, DNS stops working, pods cannot resolve service names or reach each other.
DNS runs on port 53 in kube-system. Allow it explicitly before applying default deny egress. This is not optional.
CNI Does Not Support Your Policy Features
Flannel does not support network policies at all. Weave supports basic ingress and egress but not priorities or DNAT. If you write a policy assuming a feature and your CNI does not implement it, the traffic flows anyway.
Check CNI capabilities before designing policies. Calico and Cilium have the most complete implementations.
CNI Provider Trade-off Comparison
| CNI | Network Policies | L7 Filtering | Egress | Complexity |
|---|---|---|---|---|
| Calico | Full | Yes (Tiered) | Yes | Medium |
| Cilium | Full | Yes (HTTP) | Yes | Medium |
| Weave | Basic | No | Limited | Low |
| Flannel | None | No | No | Lowest |
| AWS VPC CNI | Partial | No | Yes | Medium |
For production security, Calico or Cilium are the practical choices. Flannel and Weave work for development clusters where network policy enforcement is not a requirement.
Compliance Checklist
Network policies help meet compliance requirements for network isolation:
PCI-DSS:
- Req 1.3.1 — Restrict traffic between cardholder data environment and other networks
- Req 2.2.1 — Restrict traffic to only necessary protocols and ports
SOC 2:
- CC6.1 — Restrict access to systems and data based on need-to-know
- CC6.6 — Enforce network boundaries
```yaml
# Default deny for PCI-DSS scoped namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: cardholder-data
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
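Where the scoped namespace must reach an external system such as a payment gateway, an ipBlock rule restricts egress to a known CIDR. The address range below is a placeholder:

```yaml
egress:
  - to:
      - ipBlock:
          cidr: 203.0.113.0/24   # placeholder: your payment gateway's range
    ports:
      - protocol: TCP
        port: 443
```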
Key checklist items:
- Default deny applied to all untrusted namespaces
- Explicit allow rules for required traffic paths only
- DNS egress allowed on port 53 (TCP and UDP)
- Payment card data namespace isolated from general workloads
- Audit logging enabled for security group changes
- Annual review of policy effectiveness
L7 Policy Examples with Cilium
Cilium supports HTTP-level network policies for fine-grained L7 control:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-access-control
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: api-backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/v1/.*"
              - method: POST
                path: "/api/v1/users"
                headers:
                  # Assumed content type for this API
                  - 'Content-Type: application/json'
              - method: GET
                path: "/api/v1/health"
```

This policy allows only the frontend to reach the API, and only on specific HTTP methods and paths. A DELETE request, or a POST to any path other than /api/v1/users, is rejected at the proxy. The headers entry additionally requires matching POST requests to carry that exact header.
```yaml
# Deny specific paths for a microservice
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deny-admin-paths
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: internal-api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: web-frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/v1/.*"
                # Anything not matched here (POST, PUT, DELETE,
                # other paths) is implicitly denied
```
Anti-Patterns
Blanket `podSelector: {}`
`podSelector: {}` selects every pod in the namespace, including workloads you may have forgotten about: jobs, operators, sidecars. Applying a restrictive allow policy to all of them can break core functionality.
Default deny policies are the deliberate exception. Beyond those, always scope policies to your application pods specifically.
Ingress Only
Focusing only on ingress means egress is wide open. A compromised pod can still exfiltrate data or call out to malicious servers.
Define both ingress and egress rules. Include DNS.
Forgetting DNS Egress
With default deny egress and no DNS allow rule, service discovery stops working: pods cannot resolve `.cluster.local` names or external domains.
The DNS rule is mandatory. Add it before anything else.
Quick Recap Checklist
- Default deny policies applied to untrusted namespaces first
- Only required traffic explicitly allowed per application
- DNS egress allowed (port 53 to cluster DNS service)
- Policies tested in staging before production
- CNI plugin capabilities verified for required features
- Policy visualization used to check for unintended exposure
- Network Policies combined with RBAC and Secrets encryption
- Policy rationale documented and reviewed during security audits
For more on Kubernetes networking, see the Services and Networking post.