Kubernetes Workload Resources: Deployments, StatefulSets, and DaemonSets
Understand Kubernetes workload resources—when to use Deployments for stateless apps, StatefulSets for clustered workloads, and DaemonSets for node-level agents.
If you are running applications on Kubernetes, you need a way to manage the pods running your code. Kubernetes does not just schedule pods onto nodes and leave them there. It provides workload resources that handle replication, scaling, rolling updates, and fault tolerance. This post walks through the three most important workload types: Deployment, StatefulSet, and DaemonSet.
If you are new to Kubernetes, start with the Kubernetes fundamentals post before diving into workloads. For advanced orchestration patterns, check the Advanced Kubernetes post.
When to Use / When Not to Use
Use this decision tree to pick the right workload resource:
```
Does your application need stable network identity
or persistent storage across restarts?
├── NO  → Is it a node-level agent (logging, monitoring, networking)?
│         ├── YES → DaemonSet
│         └── NO  → Deployment
└── YES → Is it a database, message queue, or leader-elected service?
          ├── YES → StatefulSet
          └── NO (you just need scaling) → Consider if Deployment suffices
```
Use a Deployment for: web applications, stateless APIs, queue workers. Any workload where pods are interchangeable.
Skip Deployment when: you need stable hostnames, ordered startup, or persistent storage across restarts.
Use a StatefulSet when: you are running databases (PostgreSQL, MySQL, MongoDB), message queues (Kafka, RabbitMQ), or leader-elected services (ZooKeeper, etcd). If the pod name determines which data is yours, you need StatefulSet.
Skip StatefulSet when: your app is stateless. StatefulSets add real complexity around scaling and storage. If you do not need stable identity, Deployment is simpler.
Use a DaemonSet when: you need something on every node. Log collectors, node exporters, CNI plugins, storage daemons. These are infrastructure concerns, not application concerns.
Skip DaemonSet when: your workload scales with user load, not node count. Use Deployment instead.
ReplicaSet and Deployment Patterns
A ReplicaSet makes sure a specific number of identical pod replicas are running at any given time. You rarely create ReplicaSets directly. The Deployment wraps a ReplicaSet and adds declarative update capabilities on top.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
```
The Deployment controller watches the ReplicaSet and creates a new ReplicaSet whenever you update the pod template. It also keeps a revision history so you can roll back if something goes wrong.
```shell
kubectl rollout history deployment/web-frontend
kubectl rollout undo deployment/web-frontend
kubectl rollout undo deployment/web-frontend --to-revision=2
```
Deployments work well for stateless applications where each replica is interchangeable. You do not need persistent storage or a fixed network identity for each replica.
Scaling a Deployment
```shell
kubectl scale deployment web-frontend --replicas=5
```
Or update the spec directly:
```shell
kubectl patch deployment web-frontend -p '{"spec":{"replicas":5}}'
```
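Manual scaling works for predictable load. For variable traffic, a HorizontalPodAutoscaler can manage the replica count for you. A minimal sketch, assuming the metrics server is installed in the cluster and targeting the web-frontend Deployment from above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 3
  maxReplicas: 10          # illustrative bounds; tune to your capacity
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Note that an HPA and manual `kubectl scale` commands will fight each other; pick one mechanism per Deployment.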
The maxSurge and maxUnavailable fields control update behavior:
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```
With these settings, Kubernetes adds one new pod before removing an old one. Zero downtime updates become straightforward.
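Zero downtime also depends on Kubernetes knowing when a new pod can actually serve traffic, which is the job of a readiness probe. A sketch for the nginx container above, assuming a hypothetical `/healthz` endpoint on port 80:

```yaml
spec:
  template:
    spec:
      containers:
        - name: nginx
          readinessProbe:
            httpGet:
              path: /healthz   # hypothetical health endpoint; use your app's real one
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

Without a probe, old pods are removed as soon as new containers start, whether or not the application inside is ready.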
Rolling Updates and Rollback Strategies
Rolling updates proceed pod by pod. Kubernetes replaces old pods with new ones while keeping enough replicas running to handle traffic.
The minReadySeconds field tells Kubernetes how long a newly created pod must be ready, without any of its containers crashing, before the rollout counts it as available:
```yaml
spec:
  minReadySeconds: 30
  progressDeadlineSeconds: 600
```
If the rollout makes no progress within progressDeadlineSeconds, Kubernetes marks the Deployment's Progressing condition as False with reason ProgressDeadlineExceeded so you can detect the stalled rollout.
For databases or stateful services, rolling updates do not work the same way. You need to think carefully about schema migrations and data consistency. This is where StatefulSets become relevant.
StatefulSet Identity and Stable Storage
StatefulSets give each pod a persistent identity. Pods have stable network names and persistent storage that survives restarts. This matters for clustered databases, message queues, and leader-elected services.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-cluster
  namespace: database
spec:
  serviceName: "postgres-cluster"
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "fast-ssd"
        resources:
          requests:
            storage: 50Gi
```
Each pod gets a predictable name: postgres-cluster-0, postgres-cluster-1, postgres-cluster-2. The volume claims persist even when pods reschedule to different nodes.
StatefulSets support ordered deployment and scaling. Pod 1 will not start until Pod 0 is running and ready. This ordering matters for primary-secondary database clusters where you need to establish quorum before adding replicas.
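The serviceName field above refers to a headless Service that must exist for the per-pod DNS names (such as postgres-cluster-0.postgres-cluster.database.svc.cluster.local) to resolve. A minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster
  namespace: database
spec:
  clusterIP: None      # headless: gives each StatefulSet pod its own DNS record
  selector:
    app: postgres
  ports:
    - name: postgres
      port: 5432
```

Clients that need a specific replica (for example, the primary) connect to the pod's stable DNS name rather than the load-balanced Service address.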
Managing StatefulSet Scaling
```shell
kubectl scale statefulset postgres-cluster --replicas=5
```
The new pods provision their own persistent volumes. Scaling down removes pods in reverse order, but persistent volumes do not get deleted automatically. You need to handle data migration before scaling down.
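On Kubernetes 1.27 and later, the persistentVolumeClaimRetentionPolicy field lets you state explicitly what should happen to those PVCs. A sketch showing the defaults spelled out:

```yaml
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain   # keep PVCs even if the StatefulSet is deleted
    whenScaled: Retain    # keep PVCs when scaling down (the default behavior)
```

Setting either field to Delete tells the controller to remove the PVCs automatically; keep Retain for databases unless you are certain the data is disposable.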
DaemonSet for Cluster-Wide Agents
A DaemonSet runs one pod on every node (or on nodes matching a selector). This makes sense for log collectors, monitoring agents, and node-level services that need to be present on every machine.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-log-collector
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      tolerations:
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
```
Notice the toleration for control-plane nodes. By default, DaemonSets do not schedule onto control-plane nodes. Add tolerations if you need to run on those nodes too.
You can also restrict DaemonSets to specific nodes using node selectors or node affinity:
```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "topology.kubernetes.io/zone"
                    operator: In
                    values:
                      - us-east-1a
```
DaemonSets automatically scale when you add nodes to the cluster. The controller creates pods on the new nodes without any additional action from you.
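Updates to a DaemonSet are governed by its updateStrategy. Limiting maxUnavailable keeps most nodes covered while an agent image rolls out; a sketch:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace the agent one node at a time
```

For agents that must never be restarted automatically, the OnDelete strategy updates a node's pod only when you delete it manually.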
Choosing the Right Workload
The choice comes down to your application characteristics:
| Workload | When to Use | Identity | Storage |
|---|---|---|---|
| Deployment | Stateless services, web apps, APIs | None (pods are interchangeable) | Ephemeral |
| StatefulSet | Databases, message queues, leader-elected services | Stable network names, ordered deployment | Persistent via volumeClaimTemplates |
| DaemonSet | Node-level agents, log collectors, monitoring | None (one per node) | Depends on configuration |
Do not use a StatefulSet when a Deployment would suffice. StatefulSets add complexity around scaling, updates, and storage management. If your application does not need stable identity or persistent storage, stick with a Deployment.
For general guidance on Kubernetes architecture, see the Kubernetes fundamentals post. For more advanced patterns like custom controllers and operators, check the Advanced Kubernetes post.
Production Failure Scenarios
StatefulSet Fails to Scale Due to Volume Binding Issues
When a StatefulSet tries to scale up, the PVC provisioner may fail if the StorageClass cannot match the PVC requirements or the cloud quota is exhausted.
Symptoms: the StatefulSet does not scale, the new pod stays Pending, and its describe output shows a FailedScheduling event mentioning volume binding or unbound PersistentVolumeClaims.
Diagnosis:
```shell
kubectl describe statefulset postgres-cluster -n database
kubectl get events --sort-by='.lastTimestamp' -n database
kubectl get pvc -n database   # check Bound status
```
Mitigation: Pre-provision PVs for known storage needs. Set StorageClass volumeBindingMode: WaitForFirstConsumer to avoid cross-zone binding issues. Monitor cloud storage quotas.
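A StorageClass with WaitForFirstConsumer delays volume creation until the pod is scheduled, so the volume is provisioned in the same zone as the pod. A sketch for the "fast-ssd" class referenced earlier, assuming the AWS EBS CSI driver (swap the provisioner and parameters for your platform):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com        # assumption: AWS EBS CSI driver
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain               # keep the underlying volume on PVC deletion
```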
DaemonSet Not Scheduling on Control-Plane Nodes
DaemonSets ignore control-plane nodes by default. The taint keeps your infrastructure pods off the control plane, which is usually what you want. But CNI plugins and some monitoring agents need to run there too.
Symptoms: kubectl get daemonset shows fewer nodes than your cluster has.
Mitigation: Add this toleration to your DaemonSet spec:
```yaml
tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
```
Anti-Patterns
Using Deployment for Databases
This one comes up constantly in audits. Running PostgreSQL as a Deployment will eventually corrupt your data: during a rolling update or after a node failure, two pods can end up attached to the same volume and writing to it at once. Use a StatefulSet instead.
Not Setting minReadySeconds
If you skip minReadySeconds and have no readiness probe, Kubernetes considers a pod available the instant its containers start, not when the application inside can actually handle traffic. Your load balancer will send requests to pods that are still spinning up.
Set minReadySeconds to at least your application’s worst-case startup time.
Setting Identical Requests and Limits for All Pods
Giving every pod the same resources is lazy. A web server and a batch job have completely different resource profiles. Profile your workloads first, then set appropriate values.
Quick Recap Checklist
Use this checklist when choosing and configuring Kubernetes workload resources:
- Chose Deployment for stateless services, StatefulSet for databases/stateful apps, DaemonSet for node agents
- Set `minReadySeconds` to match application startup time
- Configured `maxSurge` and `maxUnavailable` for zero-downtime rolling updates
- Used StatefulSet with `volumeClaimTemplates` for persistent storage needs
- Added tolerations to DaemonSet if scheduling on control-plane nodes is needed
- Set resource requests and limits for all containers
- Used `readinessGate` for applications that need additional health checks beyond container-level probes
- Tested scaling behavior in staging before production
- Monitored StatefulSet ordered scaling behavior when adding or removing replicas
Conclusion
Workload resources are the backbone of application management in Kubernetes. Deployments handle stateless applications with zero-downtime updates. StatefulSets provide stable identity and persistent storage for stateful clustered workloads. DaemonSets ensure node-level agents run everywhere in your cluster.
Start with a Deployment for anything stateless. Reach for a StatefulSet when you need ordered scaling, stable network identity, or persistent storage that survives pod restarts. Use a DaemonSet for logging, monitoring, and other node-level services that belong on every machine.
Understand these three workload types and you can deploy almost any application on Kubernetes with confidence.