Developing Helm Charts: Templates, Values, and Testing
Create production-ready Helm charts with Go templates, custom value schemas, and testing using Helm unittest and ct.
Creating Helm charts that others can confidently use requires attention to directory structure, template logic, validation, and testing. This guide walks through building charts that meet production standards.
Chart Directory Structure
Every Helm chart follows a predictable layout. At minimum, you need:
mychart/
├── Chart.yaml           # Chart metadata and dependencies
├── values.yaml          # Default configuration values
├── values.schema.json   # Optional: JSON schema for values validation
├── templates/           # Kubernetes manifest templates
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── _helpers.tpl     # Named template definitions
│   └── NOTES.txt        # Post-install instructions
└── tests/               # Test files
    └── deployment_test.yaml
The Chart.yaml defines the chart itself:
apiVersion: v2
name: myapplication
description: A Helm chart for My Application
type: application
version: 1.0.0
appVersion: "2.1.0"
keywords:
- webapp
- api
home: https://myapp.example.com
sources:
- https://github.com/myorg/myapp
Template Functions and Sprig
Helm uses Go’s text/template engine extended with Sprig functions. Common categories:
String manipulation:
# values.yaml
releaseName: my-app
environment: production
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Values.nameOverride | default .Chart.Name }}
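A few more Sprig string helpers are worth knowing. The fragment below is illustrative only (the label keys are hypothetical), showing `upper`, `printf`, and the common `trunc 63 | trimSuffix "-"` idiom that keeps generated names within the Kubernetes 63-character label limit:

```yaml
# Illustrative fragment; assumes the values.yaml shown above
env-label: {{ .Values.environment | upper | quote }}
app-label: {{ printf "%s-%s" .Release.Name .Values.environment | trunc 63 | trimSuffix "-" }}
```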
Logical operations:
{{- if gt (int .Values.replicaCount) 1 }}
replicas: {{ .Values.replicaCount }}
{{- end }}
{{- if eq .Values.environment "production" }}
strategy:
  type: RollingUpdate
{{- end }}
Flow control:
{{- with .Values.image }}
image: "{{ .repository }}:{{ .tag }}"
{{- end }}
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
Named Templates and Helpers
The _helpers.tpl file defines reusable templates. These keep your charts DRY and provide consistent naming conventions.
# _helpers.tpl
{{/*
Expand the name of the chart
*/}}
{{- define "mychart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mychart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
Use these in your templates:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ include "mychart.name" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
Values Schema Validation
The values.schema.json enforces structure and types on user-provided values. This catches configuration errors before deployment.
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "My Application",
  "type": "object",
  "properties": {
    "image": {
      "type": "object",
      "properties": {
        "repository": {
          "type": "string",
          "description": "Container image repository"
        },
        "tag": {
          "type": "string"
        },
        "pullPolicy": {
          "type": "string",
          "enum": ["IfNotPresent", "Always", "Never"]
        }
      },
      "required": ["repository", "tag"]
    },
    "replicaCount": {
      "type": "integer",
      "minimum": 1,
      "maximum": 10,
      "default": 1
    },
    "service": {
      "type": "object",
      "properties": {
        "type": {
          "type": "string",
          "enum": ["ClusterIP", "NodePort", "LoadBalancer"]
        },
        "port": {
          "type": "integer",
          "minimum": 1,
          "maximum": 65535
        }
      },
      "required": ["type", "port"]
    }
  },
  "required": ["image"]
}
When users provide invalid values, Helm reports the error clearly:
$ helm install myapp ./mychart -f values.yaml
Error: values don't meet the specifications of the schema(s) in the following chart(s):
myapplication:
- replicaCount: Must be less than or equal to 10
Testing with Helm Unittest
The Helm unittest plugin runs tests defined in YAML files under the tests/ directory.
# tests/deployment_test.yaml
suite: Deployment suite
templates:
  - deployment.yaml
tests:
  - name: should create a deployment
    asserts:
      - isKind:
          of: Deployment
      - equal:
          path: metadata.name
          value: RELEASE-NAME-myapplication
      - equal:
          path: spec.replicas
          value: 1
  - name: should have correct labels
    asserts:
      - equal:
          path: metadata.labels["app.kubernetes.io/name"]
          value: myapplication
  - name: should use the correct image
    set:
      image.repository: nginx
      image.tag: "1.21"
    asserts:
      - equal:
          path: spec.template.spec.containers[0].image
          value: nginx:1.21
Run tests with:
helm unittest ./mychart
For more comprehensive validation, consider ct (chart-testing), which integrates with CI/CD pipelines: it lints charts (including version-increment checks against the target branch) and performs full install tests against a real cluster, typically a throwaway kind cluster.
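If you adopt ct, a repository-level ct.yaml keeps CI invocations short. A sketch under assumed repo layout; the keys mirror ct's command-line flags, so verify them against your installed ct version:

```yaml
# ct.yaml -- chart-testing configuration (assumes charts live under charts/)
chart-dirs:
  - charts
target-branch: main
check-version-increment: true
validate-maintainers: false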
Publishing to Chart Repositories
When your chart is ready, package and publish it:
# Package the chart
helm package ./mychart
# If using ChartMuseum or a similar repo server:
curl -F "chart=@mychart-1.0.0.tgz" http://localhost:8080/api/charts
# For OCI-based registries:
# For OCI-based registries (Helm 3.8+; the older experimental
# "helm chart save/push" commands were removed):
helm push mychart-1.0.0.tgz oci://myregistry.azurecr.io/helm
Your repository server generates and serves the index.yaml that clients use to discover charts; users then add the repository and install. For managing chart repositories at scale, see Helm Repository Management. For CI/CD integration patterns, see Designing Effective CI/CD Pipelines.
helm repo add myrepo https://myrepo.example.com
helm repo update
helm install myapp myrepo/mychart --version 1.0.0
When to Use / When Not to Use
When to build custom Helm charts
Reach for custom Helm charts when you need to package internal platform components that teams will reuse across projects. Database operators, messaging middleware, monitoring agents, and shared infrastructure services are all good candidates. If you find yourself copying YAML manifests between teams or repositories, that is a chart waiting to happen.
Chart development also makes sense for applications with complex multi-environment configuration. When dev, staging, and production differ in ways that cannot be expressed with simple values overrides, chart templates give you the control to handle that complexity cleanly.
When to skip custom charts
For one-off deployments that will never be reused, a plain Kubernetes manifest with kubectl apply is simpler and has less overhead. If your team is already standardized on a GitOps tool like ArgoCD with its own templating, adding Helm on top may be redundant.
Do not build a chart just because Helm is the trendy tool. A chart that wraps a single Deployment with no parameterization adds indirection without value.
Chart Development Lifecycle Flow
flowchart TD
A[Write Chart.yaml<br/>Define metadata] --> B[Create templates<br/>deployment.yaml, service.yaml]
B --> C[Add _helpers.tpl<br/>Named templates]
C --> D[Define values.yaml<br/>Default configuration]
D --> E[Add values.schema.json<br/>Validation]
E --> F[Write tests<br/>helm unittest]
F --> G{Tests pass?}
G -->|No| H[Fix templates<br/>or tests]
H --> F
G -->|Yes| I[Lint & security scan<br/>helm lint, trivy]
I --> J[Package & publish<br/>helm package]
J --> K[Add to chart repo<br/>or OCI registry]
Production Failure Scenarios
Template Rendering Failures
Go template syntax errors in charts produce unhelpful messages at deployment time rather than development time, and a typoed value path is worse: it does not error at all, it silently renders as an empty string.
# Always dry-run before installing
helm upgrade --install myapp ./mychart --dry-run --debug
# Catch schema errors early
helm lint ./mychart --strict
Test Coverage Gaps
Tests that only verify happy paths miss regressions in edge cases. If your chart has conditional resources (ingress, PVCs, init containers), write tests for both enabled and disabled states.
# Test that the ingress is NOT rendered when disabled
templates:
  - ingress.yaml
tests:
  - name: should not render ingress when disabled
    set:
      ingress.enabled: false
    asserts:
      - hasDocuments:
          count: 0
Version Drift in Dependencies
Charts that depend on external charts from Bitnami or other public repositories can break when those dependencies release new versions. A chart that worked last month may fail this month because a sub-chart changed its value structure.
Run helm dependency update locally when you intend to change versions, commit the resulting Chart.lock, and have CI run helm dependency build so it fetches exactly what the lock file records. Pin exact versions, not version ranges.
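A Chart.yaml dependencies block with exact pins might look like this (chart names and versions are illustrative):

```yaml
# Chart.yaml (excerpt) -- pin exact versions, not ranges like ">=17.0.0"
dependencies:
  - name: postgresql
    version: "12.5.8"            # exact pin; avoid "~12.5.0" or "^12.0.0"
    repository: https://charts.bitnami.com/bitnami
  - name: redis
    version: "17.11.3"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled     # pulled in only when .Values.redis.enabled is true
```

The `condition` field ties an optional sub-chart to a values flag, so users can disable it without editing Chart.yaml.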
Release Name Collisions
Helm releases are identified by name within a namespace. A second helm install with an existing name fails with "cannot re-use a name that is still in use", while helm upgrade --install silently takes over the existing release. The --generate-name flag or a team-wide release naming convention prevents accidental collisions.
Resource Scope Mistakes
A chart that creates cluster-scoped resources (like CustomResourceDefinitions or cluster roles) cannot be installed into a single namespace. If your chart needs both namespace-scoped and cluster-scoped resources, document this requirement explicitly.
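One common mitigation is to gate cluster-scoped objects behind a value so namespace-restricted installers can opt out. A sketch; the `rbac.createClusterRoles` value is hypothetical:

```yaml
# templates/clusterrole.yaml -- rendered only when the installer has cluster rights
{{- if .Values.rbac.createClusterRoles }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "mychart.name" . }}-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
{{- end }}
```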
Observability Hooks
Track chart rendering and deployment health with these observability practices.
Template Debugging
# Render locally without installing
helm template myapp ./mychart --debug
# Validate the rendered manifests against the API server
helm template myapp ./mychart | kubectl apply --dry-run=server -f -
# Preview exactly what an upgrade would change
helm upgrade --install myapp ./mychart --dry-run --debug
Release Introspection
# See all values passed to a release
helm get values myapp --all
# View the rendered templates for a live release
helm get manifest myapp
# Check release history and status
helm history myapp
helm status myapp
CI/CD Validation Pipeline
# Example CI pipeline for chart development
- name: Lint and test
  run: |
    helm lint ./mychart --strict
    helm unittest ./mychart
    ct lint --charts ./mychart
- name: Security scan
  run: |
    trivy config ./mychart
- name: Render validation
  run: |
    helm template myapp ./mychart --debug
Common Pitfalls / Anti-Patterns
Overly generic values names
Naming values value1, value2 instead of replicaCount, imageTag makes charts impossible to use without reading the source.
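Compare an opaque values layout with a self-describing one (both hypothetical):

```yaml
# Opaque -- users must read the templates to learn what these control
value1: 3
value2: "nginx:1.25"

# Self-describing -- intent is obvious from the keys alone
replicaCount: 3
image:
  repository: nginx
  tag: "1.25"
```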
Hardcoding release name
Using .Release.Name directly instead of through a helper means the chart only works when installed with a specific release name pattern.
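The conventional fix is a fullname helper that incorporates the release name while still honoring an override; this is a simplified version of the helper that `helm create` scaffolds:

```yaml
{{/* templates/_helpers.tpl -- release-aware name, overridable via fullnameOverride */}}
{{- define "mychart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name (default .Chart.Name .Values.nameOverride) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
```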
Missing default values
Omitting defaults from values.yaml forces users to provide all values, even for optional settings. Always provide sensible defaults.
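Defaults belong in values.yaml, with `default` as a template-side fallback for values that may legitimately be empty (the values below are illustrative):

```yaml
# values.yaml -- every optional setting gets a sensible default
replicaCount: 1
image:
  repository: nginx
  tag: ""              # empty means "fall back to the chart's appVersion"
  pullPolicy: IfNotPresent
```

In the template, the fallback then looks like `image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"`.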
Not using JSON schema validation
Without values.schema.json, invalid values fail at template render time with confusing Go template errors. Schema validation catches mistakes immediately with clear messages.
Forgetting hook idempotency
Hooks that run Jobs or Pods must be designed to run multiple times without creating duplicate resources. Use hook-delete-policy: before-hook-creation and make migration scripts idempotent.
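A pre-upgrade migration Job using that delete policy might look like the sketch below; the image and command are hypothetical:

```yaml
# templates/migration-job.yaml -- the previous hook Job is deleted before each
# new run (before-hook-creation), so re-running an upgrade never collides
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myorg/myapp-migrations:1.0.0   # hypothetical image
          command: ["./migrate", "up"]          # script itself must be idempotent
```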
Chart Development Trade-offs
Building a Helm chart involves trade-offs between flexibility, complexity, and maintainability.
| Approach | When to Use | Trade-offs |
|---|---|---|
| Simple values with few conditionals | Single application, few environments | Works until configuration complexity grows |
| Extensive template logic with named templates | Large charts with complex conditional resources | Templates become hard to read and debug |
| JSON schema validation | Charts used by multiple teams | Schema changes require chart version bumps |
| Library charts for shared templates | Platform teams standardizing patterns | Version synchronization across teams adds overhead |
| Helm unittest for test coverage | Charts with conditional resources or complex logic | Tests slow down chart development; need CI integration |
| ChartMuseum for internal repos | Single team or organization | ChartMuseum requires maintenance; no built-in image registry |
| OCI artifacts for charts | Teams already using OCI registries | Requires Helm 3.8+; less mature ecosystem support |
The practical rule: start simple. Add template complexity only when the duplication becomes unmanageable. Add schema validation when the chart will be used by others. Add testing when the chart has multiple conditional resources that could break in unexpected combinations.
Quick Recap
Key Takeaways
- Directory structure, named templates, and values schema validation form the foundation of maintainable charts
- Helm unittest and ct provide test coverage that catches regressions before users encounter them
- Always dry-run and lint in CI before publishing
- Chart dependencies need locked versions to prevent supply chain breakages
Development Workflow Checklist
# 1. Create chart
helm create ./mychart
# 2. Add templates, values, and helpers
# 3. Add JSON schema validation
# Edit values.schema.json
# 4. Write tests
mkdir ./mychart/tests && vim ./mychart/tests/deployment_test.yaml
# 5. Run tests
helm unittest ./mychart
# 6. Lint
helm lint ./mychart --strict
# 7. Package
helm package ./mychart
# 8. Install from local chart
helm upgrade --install myapp ./mychart-1.0.0.tgz --dry-run
For more on Helm basics, see our Helm Charts guide. If you are interested in GitOps-style chart management, our GitOps article covers declarative deployment patterns.