Kustomize: Native Kubernetes Configuration Management
Use Kustomize for declarative Kubernetes configuration management without Helm's templating—overlays, patches, and environment-specific customization.
Kustomize offers a different approach to Kubernetes configuration management. Instead of templating, it uses overlays and patches to transform base configurations. This guide covers Kustomize fundamentals and when to choose it over Helm.
When to Use / When Not to Use
When Kustomize makes sense
Kustomize is a good fit when your team owns the application manifests directly and wants environment-specific variations without introducing a new templating layer. If your dev, staging, and production configs differ in straightforward ways (replica counts, image tags, namespace names), overlays let you express those differences clearly without Go template syntax.
GitOps workflows benefit from Kustomize because the source YAML stays readable and diffable. You do not need to mentally render a template to understand what will be deployed.
For platform teams building reusable bases that other teams consume as a starting point, Kustomize components provide composition without publishing packages.
When to choose Helm instead
If you need to distribute a reusable package to users who should not see the underlying template logic, Helm charts are better. The extensive ecosystem of Bitnami and public charts gives you off-the-shelf solutions for databases, caches, and middleware.
When your configuration varies in complex ways that do not map cleanly to overlays and patches, Helm templating offers more expressive power.
Kustomize Workflow
```mermaid
flowchart TD
    A[base/<br/>kustomization.yaml] --> B[Overlays]
    B --> C[development/<br/>kustomization.yaml]
    B --> D[staging/<br/>kustomization.yaml]
    B --> E[production/<br/>kustomization.yaml]
    C --> F[kubectl kustomize<br/>./overlays/development]
    D --> G[kubectl kustomize<br/>./overlays/staging]
    E --> H[kubectl kustomize<br/>./overlays/production]
    F --> I[Transformed<br/>YAML manifests]
    G --> I
    H --> I
```
Kustomize Overview and kubectl Integration
Kustomize has been built into kubectl since Kubernetes 1.14, so no separate installation is needed. It reads kustomization.yaml files and produces transformed Kubernetes manifests.
```bash
# Basic usage with kubectl
kubectl apply -k ./overlays/production

# Build without applying
kubectl kustomize ./base

# View diff before applying
kubectl diff -k ./overlays/production
```
A simple Kustomize structure:
```
app/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── service.yaml
└── overlays/
    ├── development/
    │   └── kustomization.yaml
    └── production/
        └── kustomization.yaml
```
Base and Overlay Structure
The base directory contains your canonical configuration. Overlays modify the base for specific environments.
Base kustomization.yaml:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app.kubernetes.io/part-of: myapp
images:
  - name: nginx
    newTag: "1.21"
```
Base deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```
Development overlay:
```yaml
# overlays/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:          # "bases" is deprecated; directories go under "resources"
  - ../../base
namespace: myapp-dev
namePrefix: dev-
commonLabels:
  environment: development
replicas:
  - name: myapp
    count: 1
images:
  - name: nginx
    newTag: "1.21-debug"
```
Production overlay:
```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:          # "bases" is deprecated; directories go under "resources"
  - ../../base
namespace: myapp-prod
namePrefix: prod-
commonLabels:
  environment: production
replicas:
  - name: myapp
    count: 5
images:
  - name: nginx
    newTag: "1.21.1"
```
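Rendering the production overlay makes the transformations concrete. An abridged sketch of what kubectl kustomize ./overlays/production emits for the Deployment (the Service is transformed the same way; template metadata and container spec omitted for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-myapp        # namePrefix applied
  namespace: myapp-prod   # namespace transformer applied
  labels:
    app.kubernetes.io/part-of: myapp   # commonLabels from the base
    environment: production            # commonLabels from the overlay
spec:
  replicas: 5             # replicas transformer
  selector:
    matchLabels:
      app: myapp
      app.kubernetes.io/part-of: myapp
      environment: production
```

Note that commonLabels are injected into selectors as well as metadata, which is why changing them on an already-deployed workload fails: label selectors are immutable.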
Patches and Strategic Merge
Kustomize supports two patch strategies: strategic merge patches (similar to kubectl patch) and JSON patches (RFC 6902).
Strategic merge patches:
```yaml
# patches/replica-change.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
```

```yaml
# kustomization.yaml
patches:
  - path: patches/replica-change.yaml
```
JSON 6902 patches for fine-grained control:
patches/resource-limits.json:

```json
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/resources/limits/memory",
    "value": "512Mi"
  }
]
```

```yaml
# kustomization.yaml
patches:
  - target:
      kind: Deployment
      name: myapp
    path: patches/resource-limits.json
```
Targeted patches:
# Patch only resources matching labels
patches:
- patch: |-
- op: replace
path: /spec/replicas
value: 3
target:
labelSelector: tier=frontend
Generators (ConfigMap and Secret)
Kustomize can generate ConfigMaps and Secrets from files, literals, or env files.
Literal-based ConfigMap:
```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-config
    literals:
      - DATABASE_HOST=localhost
      - DATABASE_PORT=5432
      - LOG_LEVEL=info
```
File-based ConfigMap:
```properties
# config/app.properties
database.url=jdbc:postgresql://localhost:5432/mydb
cache.enabled=true
```

```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-config
    files:
      - config/app.properties
```
Env file ConfigMap:
```
# envvars.txt
PORT=8080
WORKERS=4
```

```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-env
    envs:
      - envvars.txt
```
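By default the generator appends a content hash to the name, so the generated resource looks roughly like this (the hash suffix below is illustrative; it changes whenever the data changes):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env-7g2bk9c6m4   # illustrative hash suffix added by the generator
data:
  PORT: "8080"
  WORKERS: "4"
```

Kustomize rewrites references to app-env in the same kustomization to the hashed name, which is what makes pods roll when the config changes.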
Generate a Secret:

```yaml
# kustomization.yaml
secretGenerator:
  - name: app-secret
    literals:
      - api-key=changeme   # literals are taken verbatim and stored base64-encoded, not encrypted
    envs:
      - secrets.txt
```
Kustomize Components for Reuse
Components allow reusable configuration blocks that can be imported across multiple kustomizations.
Define a monitoring component:
```yaml
# components/monitoring/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - deployment-metrics.yaml
  - service-monitor.yaml
patches:
  - patch: |-
      - op: add
        path: /spec/template/spec/containers/-
        value:
          name: prometheus
          image: prom/prometheus:latest
          ports:
            - containerPort: 9090
    target:
      kind: Deployment
```
Use the component:
```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:          # "bases" is deprecated; directories go under "resources"
  - ../../base
components:
  - ../../components/monitoring
patches:
  - path: patches/production-config.yaml
```
Kustomize vs Helm Comparison
Both tools solve configuration management but with different philosophies:
| Aspect | Kustomize | Helm |
|---|---|---|
| Approach | Overlay/patch | Template with values |
| Learning curve | Lower (no new syntax) | Higher (Go templates) |
| Flexibility | Limited to kustomize features | Highly flexible |
| Debugging | Direct YAML output | Rendered templates |
| Secret management | Built-in generators | External tools needed |
| Ecosystem | Native to Kubernetes | Large chart repository |
| Best for | App-centric deployments | Off-the-shelf packages |
Choose Kustomize when you want direct control over your YAML and do not need to package complex application logic. Choose Helm when you want to distribute reusable packages or need the extensive ecosystem of public charts.
Production Failure Scenarios
Name prefix collisions across teams
If two teams both use namePrefix: dev- in their overlays and deploy to the same cluster, resources can collide. The dev-api Deployment from team A overwrites the dev-api Deployment from team B.
Use team-specific prefixes or deploy to isolated namespaces.
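One low-friction convention (names here are illustrative) is to bake the team into both the prefix and the namespace of each overlay:

```yaml
# team-a/overlays/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: team-a-dev     # isolated namespace per team
namePrefix: team-a-dev-   # prefix includes the team, so "api" becomes "team-a-dev-api"
```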
Patch targets wrong resources
Patches with a target selector apply to every resource the selector matches. If multiple Deployments share the same labels, a replica patch affects all of them simultaneously.
Always use precise target selectors:
```yaml
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 10
    target:
      kind: Deployment
      name: myapp
```
ConfigMap generator creates new resources on every run
By default, Kustomize appends a content hash to generated ConfigMap names. Changing a literal produces a ConfigMap with a new name, and references inside resources managed by the same kustomization are rewritten automatically, which triggers a rolling update. The old ConfigMap is not garbage-collected, so stale generated ConfigMaps accumulate in the cluster, and any workload outside the kustomization that references the old name never picks up the change.
Disable the hash suffix (generatorOptions with disableNameSuffixHash: true) when external consumers need a stable name, or rely on your GitOps tool's pruning to clean up superseded ConfigMaps.
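A minimal sketch of disabling the suffix, with the trade-off noted inline:

```yaml
# kustomization.yaml
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info
generatorOptions:
  disableNameSuffixHash: true   # stable name "app-config"; trade-off: pods no longer
                                # roll automatically when the ConfigMap content changes
```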
Overlay path not found
If you reference ../../base in an overlay and the path is wrong, Kustomize fails with a confusing error about a missing file. The error message does not clearly indicate it is a path resolution problem.
Run kubectl kustomize locally to verify relative path references before CI, and remember that Kustomize refuses to load files outside the kustomization root unless you pass --load-restrictor LoadRestrictionsNone.
Common Pitfalls / Anti-Patterns
Overly deep directory nesting
Nesting overlays five levels deep (base, then environment, then region, then cluster, then tenant) makes it impossible to understand what the final manifest looks like without running kubectl kustomize. Keep nesting shallow and use components for composition.
Duplicate resources across overlays
If the base defines a resource and an overlay lists the same manifest again under resources (instead of patching it), kustomize build fails with a "may not add resource with an already registered id" error. Patch or replace the base resource rather than redeclaring it.
Audit your overlays regularly with kubectl kustomize ./overlay | kubectl apply --dry-run=server -f - to catch problems before they reach the cluster.
Not using --dry-run in CI
Deploying without testing the kustomize output first means you discover problems after they hit production. Always run kubectl diff -k ./overlay or kubectl apply --dry-run=server -k ./overlay in CI before applying.
Missing version pinning for kustomize
Kustomize versions differ in behavior. A kustomization that works with kustomize 3.8 may fail or produce different output with kustomize 5.0. Pin your kustomize version in CI and align it with your cluster’s kubectl version.
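A sketch of a CI step that installs a pinned release instead of whatever the runner image ships (version and architecture here are assumptions; adjust to your environment):

```yaml
# Hypothetical GitHub Actions step: install a pinned kustomize release
- name: Install pinned kustomize
  run: |
    KUSTOMIZE_VERSION=5.4.3   # keep aligned with the version your team standardized on
    curl -sL "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv${KUSTOMIZE_VERSION}/kustomize_v${KUSTOMIZE_VERSION}_linux_amd64.tar.gz" \
      | tar xz -C /usr/local/bin
    kustomize version   # fail fast if the install did not work
```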
Observability Hooks
Track the health and correctness of your Kustomize deployments with these practices.
What to monitor:
- Kustomize build success/failure rate in CI
- Time taken for kubectl kustomize to complete across environments
- Number of resources generated per overlay (sudden changes indicate unintended patches)
- YAML validation errors caught in CI vs at apply time
CI/CD observability:
```yaml
# Example: Kustomize build step with error capture
- name: Build Kustomize manifests
  id: kustomize-build
  run: |
    mkdir -p build
    kubectl kustomize ./overlays/production > build/manifests.yaml

    # Count resources by top-level kind lines (no cluster access required)
    RESOURCE_COUNT=$(grep -c "^kind:" build/manifests.yaml)
    echo "resources=$RESOURCE_COUNT" >> "$GITHUB_OUTPUT"

    # Record a checksum so later steps can detect significant changes
    sha256sum build/manifests.yaml > build/prod-checksum.txt
  continue-on-error: false
```
```yaml
# Track diff size
- name: Check manifest diff
  run: |
    kubectl kustomize ./overlays/production > build/new.yaml
    # "|| true" keeps the step alive when there are zero changed lines (grep -c exits 1)
    DIFF_COUNT=$(diff -u build/base.yaml build/new.yaml | grep -c "^[+-]" || true)
    echo "Changed lines: $DIFF_COUNT"
    if [ "$DIFF_COUNT" -gt 100 ]; then
      echo "WARNING: Large diff detected, verify intentional changes"
    fi
```
Debugging commands:
```bash
# View all generated resources
kubectl kustomize ./overlays/production

# Validate against cluster without applying
kubectl apply --dry-run=server -k ./overlays/production

# Check what will change (diff mode)
kubectl diff -k ./overlays/production

# Count resources generated
kubectl kustomize ./overlays/production | grep -c "^kind:"

# Validate syntax client-side (no cluster schema checks)
kubectl apply --dry-run=client -k ./overlays/production

# Validate against Kubernetes schemas (requires kubeconform)
kubectl kustomize ./overlays/production | kubeconform -strict -summary
```
Alert on deployment anomalies:
# Alert if a deployment has significantly more/fewer resources than expected
- alert: KustomizeResourceCountAnomaly
expr: |
(count(kustomize_generated_resources{env="production"}) by (app)
/ avg(count(kustomize_generated_resources{env="production"}) by (app))) > 1.5
labels:
severity: warning
annotations:
summary: "Kustomize resource count anomaly for {{ $labels.app }}"
description: "Production overlay generating {{ $value }}x more resources than average. Verify intentional changes."
# Alert if kustomize builds fail repeatedly in CI
- alert: KustomizeBuildFailures
expr: increase(kustomize_build_errors_total[10m]) > 3
labels:
severity: critical
annotations:
summary: "Multiple kustomize build failures detected"
description: "{{ $value }} kustomize build failures in the last 10 minutes. Check CI logs."
Quick Recap
Key Takeaways
- Kustomize uses overlays and patches instead of templates, keeping YAML readable and diffable
- The built-in kubectl kustomize requires no separate installation (Kubernetes 1.14+)
- Name prefixes and common labels help standardize resources across environments
- Components provide reusable configuration blocks without publishing packages
- Choose Kustomize for app-centric GitOps; choose Helm for distributable packages
Kustomize Checklist
```bash
# Test kustomize output without applying
kubectl kustomize ./overlays/production

# Diff against cluster state
kubectl diff -k ./overlays/production

# Apply with dry-run to catch errors
kubectl apply -k ./overlays/production --dry-run=server

# Validate build in CI (kubeval is archived; kubeconform is a maintained alternative)
kustomize build ./overlays/production | kubeval --strict
```
Trade-off Summary
| Aspect | Kustomize | Helm | plain YAML |
|---|---|---|---|
| Templating | No (patches/overlays) | Yes (Go templates) | No |
| Readability | High (plain YAML) | Medium (templates) | Highest |
| Reusability | Git-based | Chart repos | None |
| Secret handling | Sealed secrets + generators | helm-secrets | Manual |
| GitOps integration | Native kubectl | Flux/ArgoCD | Flux/ArgoCD |
| Debugging | Diff is clear | Render first | Plain kubectl |
| Ecosystem | Growing | Massive (Bitnami) | N/A |
Conclusion
Kustomize provides a template-free approach to Kubernetes configuration that works well with GitOps workflows. Its overlay system makes environment promotion straightforward, and generators simplify ConfigMap and Secret creation. For teams already using GitOps, Kustomize pairs naturally with tools like ArgoCD. See our GitOps article for deployment patterns, and the Helm Charts guide for the alternative templating approach. For managing containers at scale, see Container Registry.