
Kubernetes Beginner Series 12 — Helm and Package Management

· 7 min read
Kubernetes Series (12/12)
  1. Kubernetes Beginner Series 1 — What Is Kubernetes
  2. Kubernetes Beginner Series 2 — Cluster Architecture
  3. Kubernetes Beginner Series 3 — Pod
  4. Kubernetes Beginner Series 4 — Controllers
  5. Kubernetes Beginner Series 5 — Services and Networking
  6. Kubernetes Beginner Series 6 — Ingress and Gateway API
  7. Kubernetes Beginner Series 7 — ConfigMap and Secret
  8. Kubernetes Beginner Series 8 — Storage: PV, PVC, StorageClass
  9. Kubernetes Beginner Series 9 — Resource Management and Autoscaling
  10. Kubernetes Beginner Series 10 — RBAC and Security: The Principle of Least Privilege
  11. Kubernetes Beginner Series 11 — Observability: Logs, Metrics, and Traces
  12. Kubernetes Beginner Series 12 — Helm and Package Management
Too Many YAMLs

By this point in the series, we’ve created countless YAML files: Deployment, Service, ConfigMap, Secret, Ingress, PVC, HPA, NetworkPolicy. Ten files for a single service is nothing unusual. Duplicating all of this across namespaces and environments (dev/stage/prod) while tweaking values here and there is simply not sustainable.

In the past, people used sed for substitutions or Kustomize for patching — everyone found their own way. Over time, the de facto standard packaging tool in the Kubernetes ecosystem became Helm. It bundles the multiple Kubernetes resources that make up an application into a unit called a Chart, and separates per-environment configuration through values.yaml.

flowchart LR
    A[Chart<br/>Templates + defaults] --> C[helm install/upgrade]
    B[values.yaml<br/>Per-environment config] --> C
    C --> D[Rendered<br/>YAML]
    D --> E[Kubernetes<br/>Cluster]

In one sentence: Helm is apt, yum, or brew for Kubernetes. It injects values into parameterized templates to produce actual manifests and manages that state as a “Release.”
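The flow in the diagram can be sketched in a few lines: one parameterized template, a different value set per environment, a different rendered manifest. A toy illustration using Python’s string.Template as a stand-in for Helm’s Go templates (not Helm itself):

```python
from string import Template  # toy stand-in for Helm's Go template engine

# One parameterized "template", two per-environment value sets
tpl = Template("replicas: $replicaCount")
dev = {"replicaCount": 1}
prod = {"replicaCount": 5}

print(tpl.substitute(dev))   # replicas: 1
print(tpl.substitute(prod))  # replicas: 5
```

The same separation of template and values is what lets one chart serve every environment.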

Installation and Your First Release

First, install the Helm CLI:

# macOS
brew install helm

# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Add the official repository and install a simple chart. Let’s create our first release with the widely used bitnami/nginx chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install with release name "my-nginx"
helm install my-nginx bitnami/nginx --namespace web --create-namespace

# Check installed releases
helm list -n web
# NAME       NAMESPACE  REVISION  STATUS    CHART        APP VERSION
# my-nginx   web        1         deployed  nginx-18.x.x 1.27.x

Now the Nginx-related resources are deployed to Kubernetes. The Deployment, Service, ServiceAccount, ConfigMap, and all other resources defined by the chart are created at once.

# See what resources were created
kubectl get all -n web

# View the rendered manifests (for debugging)
helm get manifest my-nginx -n web

Let’s change some values and upgrade:

helm upgrade my-nginx bitnami/nginx \
  --namespace web \
  --set replicaCount=3 \
  --set service.type=LoadBalancer

# Release history
helm history my-nginx -n web
# REVISION  UPDATED       STATUS      CHART         DESCRIPTION
# 1         Apr 20 01:22  superseded  nginx-18.x.x  Install complete
# 2         Apr 20 01:24  deployed    nginx-18.x.x  Upgrade complete

If something goes wrong, you can roll back to a previous version:

helm rollback my-nginx 1 -n web

Helm stores the state of each release as a Kubernetes Secret. So the history shown by helm history is actually a record stored within the cluster itself.

Internal Structure of a Chart

Running helm create to scaffold an empty chart reveals the structure at a glance:

helm create myapp
tree myapp
myapp/
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default values
├── charts/             # Where dependency subcharts go
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── serviceaccount.yaml
│   ├── hpa.yaml
│   ├── _helpers.tpl    # Template functions/helpers
│   ├── NOTES.txt       # Message displayed after installation
│   └── tests/          # For helm test
└── .helmignore

The first file to look at is Chart.yaml. It carries the chart’s identity:

apiVersion: v2
name: myapp
description: My application
type: application
version: 0.1.0         # Chart version (SemVer)
appVersion: "1.0.0"    # App version (string, informational)

dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: "https://charts.bitnami.com/bitnami"
    condition: postgresql.enabled

Increment version when the chart structure changes, while appVersion reflects the version of the bundled application. They’re separated for a reason: even with the same app version, the chart structure (e.g., template changes) can evolve.

With dependencies, you can bundle other charts as subcharts. Adding a condition lets you toggle them on/off via values.
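For instance, with the condition declared above, a single values entry is enough to switch the bundled PostgreSQL subchart on or off (hypothetical values fragment):

```yaml
# values.yaml — toggles the postgresql dependency declared in Chart.yaml
postgresql:
  enabled: false   # set to true to deploy the subchart alongside the app
```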

templates and Go Template Syntax

The files in templates/ are the heart of Helm. They look like ordinary YAML, but Go template syntax is mixed in, allowing value injection and conditional/loop logic.

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
          {{- if .Values.resources }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- end }}
          {{- with .Values.env }}
          env:
            {{- range . }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}
          {{- end }}

The syntax feels unfamiliar at first, but a few patterns cover most cases: .Values.* injects values, if/with/range handle conditionals and loops, include calls named helpers, and nindent/toYaml keep the emitted YAML’s indentation valid.

_helpers.tpl defines commonly used expressions as reusable named templates:

# _helpers.tpl
{{- define "myapp.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{- define "myapp.labels" -}}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

The default helpers generated by helm create are already quite usable. When creating your first chart, start with these and just add the resources you need.
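The fullname helper’s logic is worth internalizing, since the 63-character cap comes from the DNS label limit on Kubernetes resource names. A rough Python rendition of the same logic (illustrative sketch, not Helm’s actual implementation):

```python
def fullname(release_name: str, chart_name: str, name_override: str = "") -> str:
    """Mirror of the myapp.fullname helper: reuse the release name if it
    already contains the chart name, otherwise join the two, then truncate
    to 63 chars (DNS label limit) and trim a trailing '-'."""
    name = name_override or chart_name
    full = release_name if name in release_name else f"{release_name}-{name}"
    return full[:63].removesuffix("-")

print(fullname("my-nginx", "nginx"))  # my-nginx (chart name already in release name)
print(fullname("prod", "myapp"))      # prod-myapp
```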

values.yaml — The Hub for Per-Environment Configuration

values.yaml contains the default values that templates reference. Users override these values to apply different configurations per environment.

# values.yaml
replicaCount: 1

image:
  repository: myapp
  tag: ""
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi

env:
  - name: LOG_LEVEL
    value: info

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 10
  targetCPU: 70

ingress:
  enabled: false
  className: nginx
  hosts:
    - host: myapp.local
      paths:
        - path: /
          pathType: Prefix

To use different values per environment, manage them in separate files:

# values-prod.yaml
replicaCount: 5

image:
  tag: "1.2.3"

resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 2
    memory: 4Gi

env:
  - name: LOG_LEVEL
    value: warn

autoscaling:
  enabled: true
  minReplicas: 5
  maxReplicas: 50

ingress:
  enabled: true
  hosts:
    - host: api.example.com
      paths:
        - path: /
          pathType: Prefix

Specify files with -f during installation, or override individual values with --set:

# Production deployment
helm upgrade --install myapp ./myapp \
  -n production --create-namespace \
  -f values-prod.yaml

# Override a specific value
helm upgrade --install myapp ./myapp \
  -n production \
  -f values-prod.yaml \
  --set image.tag=1.2.4

When multiple -f flags are provided, later files override earlier ones. --set has even higher priority. Leveraging this precedence, a common pattern is to layer values as common -> per-environment -> per-deployment.
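The layering behaves like a recursive, right-biased merge. A minimal Python sketch of that precedence (illustrative only; Helm’s actual value coalescing has additional rules, e.g. null handling):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Right-hand side wins; nested dicts are merged recursively
    (mirrors how later -f files and --set override earlier layers)."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

common = {"replicaCount": 1, "image": {"repository": "myapp", "tag": ""}}
prod = {"replicaCount": 5, "image": {"tag": "1.2.3"}}
set_flags = {"image": {"tag": "1.2.4"}}  # --set image.tag=1.2.4

merged = common
for layer in (prod, set_flags):  # later layers win
    merged = deep_merge(merged, layer)

print(merged["replicaCount"], merged["image"]["tag"])  # 5 1.2.4
```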

Preview Rendering and Debugging

You can preview the rendered result before actually installing:

# Output the final rendered result to stdout
helm template myapp ./myapp -f values-prod.yaml

# Dry run without installing
helm install myapp ./myapp -f values-prod.yaml --dry-run --debug

# Syntax validation
helm lint ./myapp

If there’s a mistake in the template, --debug shows detailed error locations. When developing a new chart, iteratively checking the output with helm template and fixing issues is the most efficient approach.

The Release Lifecycle

What Helm calls a “Release” is the result instance of installing a chart onto an actual cluster. You can install the same chart multiple times under different names and manage each independently.

# Install the same chart twice
helm install myapp-a ./myapp -f values-a.yaml -n tenant-a
helm install myapp-b ./myapp -f values-b.yaml -n tenant-b

# All releases
helm list -A

# Delete a specific release
helm uninstall myapp-a -n tenant-a

# Keep history instead of deleting everything
helm uninstall myapp-a -n tenant-a --keep-history

Helm stores release state in Kubernetes Secrets (sh.helm.release.v1.<name>.v<rev>). Because Helm only records what it applied, rather than continuously watching the cluster, the tracked state and the actual cluster state can drift. Use helm get all <release> to check what Helm believes the state to be.
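The payload inside those release Secrets is gzip-compressed JSON that Helm base64-encodes (and the Kubernetes API base64-encodes Secret data once more). A small Python sketch of that decoding, assuming the field as returned by the API; the round trip here is simulated rather than read from a real cluster:

```python
import base64
import gzip
import json

def decode_release(data_release_field: str) -> dict:
    """Decode the 'release' key of a sh.helm.release.v1.* Secret:
    two base64 layers, then gunzip, then parse JSON."""
    inner = base64.b64decode(base64.b64decode(data_release_field))
    return json.loads(gzip.decompress(inner))

# Simulate what such a Secret would contain for a tiny release record:
record = {"name": "my-nginx", "version": 1, "info": {"status": "deployed"}}
inner = base64.b64encode(gzip.compress(json.dumps(record).encode("utf-8")))
field = base64.b64encode(inner).decode("ascii")

print(decode_release(field)["info"]["status"])  # deployed
```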

Packaging and Sharing Charts

To share a chart you’ve created with your team or externally, package it and upload it to a repository:

# Package the chart as a tar.gz
helm package ./myapp
# Creates myapp-0.1.0.tgz

# Generate a repository index
helm repo index . --url https://charts.example.com
# Creates index.yaml

Place the generated .tgz and index.yaml somewhere HTTP-accessible (S3, GitHub Pages, internal web server) and you have a private repository. These days, pushing directly to OCI registries (ECR, GHCR, etc.) has also been standardized and is widely used.

# Push to an OCI registry
helm push myapp-0.1.0.tgz oci://registry.example.com/charts

# Install from an OCI registry
helm install myapp oci://registry.example.com/charts/myapp --version 0.1.0

Once the chart is in a repository, you can easily integrate helm upgrade --install into your CI pipeline for deployment.

Leveraging Official Chart Repositories

Most commonly used applications in production have well-maintained official charts. Use these instead of building from scratch.

Area            Representative Charts
Observability   prometheus-community/kube-prometheus-stack, grafana/loki, grafana/tempo
Ingress         ingress-nginx/ingress-nginx, traefik/traefik
Certificates    jetstack/cert-manager
Databases       bitnami/postgresql, bitnami/redis, bitnami/mongodb
Message queues  bitnami/kafka, bitnami/rabbitmq
Secrets         external-secrets/external-secrets
GitOps          argo/argo-cd

These charts incorporate lessons from countless production deployments. Configured with the right options, they can get you to production-ready state quickly. However, don’t blindly trust them as black boxes. Always run helm template at least once to review what resources are being created. Default values are surprisingly often a mismatch for your environment.

GitOps and Helm

As covered in the ArgoCD series, in a GitOps environment you don’t run helm install directly. You configure an Application CR to reference the chart repo and values, and ArgoCD handles the rendering and syncing automatically.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com
    chart: myapp
    targetRevision: 0.1.0
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

With this pattern, the need to run the helm CLI directly diminishes. Helm’s role shifts to being a template engine and package format, while the actual deployment is handled by the GitOps controller.

Alternatives to Helm

Helm isn’t the only option. Kustomize (built into kubectl) layers patches over base manifests without a template language, while tools such as Jsonnet/Tanka and cdk8s generate manifests from code. Each has its own strengths and weaknesses, so choose based on your situation.

For beginners, the Helm + Kustomize combination is the most common and has the most documentation. Using Helm for official charts and Kustomize for internal manifests is a pattern frequently seen in practice.

Wrapping Up the Series

Over 12 parts, we’ve surveyed the major concepts of Kubernetes. Looking back, it was a journey like this:

  1. Pods and Containers — The smallest deployable unit
  2. Deployments and ReplicaSets — Declarative rollouts
  3. Services and Networking — Bundling Pods into endpoints
  4. Ingress — The gateway for external traffic
  5. Namespaces and Labels — Axes for organizing resources
  6. StatefulSets and DaemonSets — Specialized workloads
  7. ConfigMaps and Secrets — Separating configuration and sensitive data
  8. PV/PVC and Storage — Persistent data
  9. Resource Management and Autoscaling — requests/limits/HPA
  10. RBAC and Security — The principle of least privilege
  11. Observability — Logs/metrics/traces
  12. Helm and Package Management — Bundling it all together

When first learning Kubernetes, the sheer number of concepts can feel overwhelming. But once you’ve deployed a few services, experienced failures, and gone through scale-out events, you start to understand viscerally why each resource exists.

Kubernetes is a vast and deep ecosystem, and this series covered only the entrance. The best next step is simply to run your own services: deploy them, break them, fix them, and then extend into the depths that match your service’s specific needs.

