Table of contents
- The Problem with Multiple Environments
- Kustomize — Separating Environments with Overlays
- Helm — Separating Configurations with Values
- Kustomize vs Helm — When to Use Which
- Per-Environment Deployment Strategy
The Problem with Multiple Environments
In Part 3, we registered a single Application. But in practice, there are multiple environments like dev, staging, and production, each with slightly different configurations. Replica counts, image tags, resource limits, ConfigMap values — all of these need to vary per environment.
The simplest approach is to copy the entire set of manifests for each environment. But then every time you modify the shared parts, you have to update the files for every environment, and if you miss one, you end up with inconsistencies between environments. It’s a maintenance nightmare.
Kustomize and Helm solve this problem in different ways. And ArgoCD natively supports both tools.
Kustomize — Separating Environments with Overlays
Kustomize is such a standard tool in the Kubernetes ecosystem that it’s built into kubectl. The core idea is having base manifests and overlaying only the per-environment differences on top.
Directory Structure
A typical Kustomize project structure looks like this:
k8s/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── service.yaml
├── overlays/
│   ├── dev/
│   │   ├── kustomization.yaml
│   │   └── patch-replicas.yaml
│   └── prod/
│       ├── kustomization.yaml
│       ├── patch-replicas.yaml
│       └── patch-resources.yaml
Common manifests go in base/, and per-environment changes are defined as patches in overlays/dev/ and overlays/prod/.
Writing the Base
The base is almost identical to the manifests we created in Part 3. Just add a kustomization.yaml.
# k8s/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
The resources field lists the manifests managed in this directory. Kustomize uses this file to determine which resources to process.
Dev Overlay
For the dev environment, we want to reduce the replicas to 1. Create a patch file.
# k8s/overlays/dev/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 1
Write a kustomization.yaml that applies this patch to the base.
# k8s/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch-replicas.yaml
namePrefix: dev-
commonLabels:
  env: dev
resources references ../../base to pull in the base manifests, and patches overlays the changes. Using namePrefix adds dev- to the front of all resource names, making it easy to distinguish between environments.
You can preview the final result with kubectl kustomize.
kubectl kustomize k8s/overlays/dev
This command doesn’t actually apply anything — it just outputs the final manifest with the base and overlay merged. It’s useful for verifying the expected result before deployment.
Prod Overlay
The production environment is a bit more complex. We increase replicas to 3 and add resource limits.
# k8s/overlays/prod/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3
# k8s/overlays/prod/patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
# k8s/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch-replicas.yaml
  - path: patch-resources.yaml
namePrefix: prod-
commonLabels:
  env: prod
The base is shared — only the overlays differ. Modifying the common parts propagates to all environments, and only the environment-specific differences need to be managed in the overlays.
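The mental model behind an overlay can be sketched in a few lines of Python: deep-merge each patch onto the matching base resource, then apply the name prefix and common labels. This is illustrative only — the real `kustomize build` uses strategic merge patches with list-merge keys, not a plain dict merge — and the `deep_merge` helper and sample dicts are hypothetical.

```python
# Rough sketch of what "kustomize build overlays/prod" does conceptually.
# NOT Kustomize's actual algorithm (which uses strategic merge patches);
# just the base + patch + transformer mental model.

def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge patch onto base; patch values win."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

base_deployment = {
    "kind": "Deployment",
    "metadata": {"name": "nginx-demo", "labels": {}},
    "spec": {"replicas": 2},
}
patch_replicas = {"spec": {"replicas": 3}}  # patch-replicas.yaml

rendered = deep_merge(base_deployment, patch_replicas)
rendered["metadata"]["name"] = "prod-" + rendered["metadata"]["name"]  # namePrefix: prod-
rendered["metadata"]["labels"]["env"] = "prod"                         # commonLabels

print(rendered["metadata"]["name"])   # prod-nginx-demo
print(rendered["spec"]["replicas"])   # 3
```

The same base dict feeds every overlay; only the patch dicts differ, which is exactly why a change to the base propagates everywhere.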
Registering a Kustomize App in ArgoCD
Here’s the flow of how ArgoCD detects a Kustomize app, renders it, and applies it to the cluster.
sequenceDiagram
participant Git as Git Repository
participant Repo as argocd-repo-server
participant Ctrl as application-controller
participant K8s as Kubernetes API
Ctrl->>Repo: Request overlays/dev rendering
Repo->>Git: clone & checkout
Repo->>Repo: kustomize build (base + patches)
Repo-->>Ctrl: Return final manifests
Ctrl->>K8s: diff (desired vs live)
Ctrl->>K8s: apply (patch/create/delete)
K8s-->>Ctrl: Status update
ArgoCD automatically detects Kustomize. Just point the path to the directory containing kustomization.yaml.
Here’s how to create the dev environment Application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-demo-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/argocd-example.git
    targetRevision: HEAD
    path: k8s/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
For the prod environment, just change the path and namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-demo-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/argocd-example.git
    targetRevision: HEAD
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Both use the same Git repo as the source, but since the path is different, the appropriate manifests are applied to each environment. When ArgoCD detects a kustomization.yaml at the specified path, it automatically runs kustomize build and applies the result to the cluster.
Helm — Separating Configurations with Values
If Kustomize is patch-based, Helm is template-based. Manifests are written as Go templates, and variables are injected through values files.
Helm Chart Structure
A basic Helm chart structure looks like this:
helm-chart/
├── Chart.yaml
├── values.yaml
├── values-dev.yaml
├── values-prod.yaml
└── templates/
    ├── deployment.yaml
    └── service.yaml
Chart.yaml holds the chart’s metadata, values.yaml contains default values, and per-environment values files define the overrides for each environment.
Chart.yaml
Define the chart name and version.
# helm-chart/Chart.yaml
apiVersion: v2
name: nginx-demo
description: A simple Nginx demo chart
type: application
version: 0.1.0
appVersion: "1.27"
Writing Templates
Helm templates use Go template syntax. {{ .Values.xxx }} references values from the values files.
# helm-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
  labels:
    app: {{ .Release.Name }}-nginx
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nginx
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nginx
    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: {{ .Values.resources.requests.cpu }}
              memory: {{ .Values.resources.requests.memory }}
            limits:
              cpu: {{ .Values.resources.limits.cpu }}
              memory: {{ .Values.resources.limits.memory }}
# helm-chart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-nginx
spec:
  selector:
    app: {{ .Release.Name }}-nginx
  ports:
    - port: 80
      targetPort: 80
Separating Values Files
Let’s visualize how multiple values files are merged to produce the final values.
flowchart LR
V1["values.yaml\n(global defaults)\nreplicas: 1\nimage.tag: 1.27\nresources: {..}"] --> MERGE{"Values merge\n(later ones take priority)"}
V2["values-prod.yaml\n(prod overrides)\nreplicas: 3\nresources: {..}"] --> MERGE
MERGE --> FINAL["Final Values\nreplicas: 3 (overridden)\nimage.tag: 1.27 (default)\nresources: {..} (overridden)"]
FINAL --> TMPL["templates/\nhelm template execution"]
TMPL --> YAML["Rendered final manifests"]
Default values go in values.yaml.
# helm-chart/values.yaml
replicas: 1
image:
  repository: nginx
  tag: "1.27"
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 100m
    memory: 128Mi
The dev environment uses the defaults as-is, overriding only what’s necessary.
# helm-chart/values-dev.yaml
replicas: 1
image:
  tag: "1.27"
The prod environment increases replicas and allocates more resources.
# helm-chart/values-prod.yaml
replicas: 3
image:
  tag: "1.27"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi
Since Helm merges values files hierarchically, the per-environment files only need to specify the values that differ — the rest comes from values.yaml defaults.
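That merge behavior can be sketched in Python: defaults from values.yaml, with each `-f` file deep-merged on top, later files winning key by key. This is a conceptual sketch, not Helm's actual Go implementation, and the `merge_values` helper and sample dicts are illustrative.

```python
# Sketch of Helm's values resolution: values.yaml supplies defaults,
# each file passed via -f / valueFiles is deep-merged on top, and
# later files win key-by-key. Illustrative only.

def merge_values(defaults: dict, overrides: dict) -> dict:
    """Recursively merge overrides onto defaults; overrides win."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)
        else:
            merged[key] = value
    return merged

values = {  # values.yaml (defaults)
    "replicas": 1,
    "image": {"repository": "nginx", "tag": "1.27"},
    "resources": {"requests": {"cpu": "50m", "memory": "64Mi"}},
}
values_prod = {  # values-prod.yaml (overrides)
    "replicas": 3,
    "resources": {"requests": {"cpu": "100m", "memory": "128Mi"}},
}

final = merge_values(values, values_prod)
print(final["replicas"])                      # 3     (overridden)
print(final["image"]["repository"])           # nginx (default kept)
print(final["resources"]["requests"]["cpu"])  # 100m  (overridden)
```

Because the merge is per-key, values-prod.yaml never has to repeat image.repository — omitting a key means "keep the default".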
Registering a Helm App in ArgoCD
When registering a Helm chart in ArgoCD, add Helm-related settings to the source.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-demo-helm-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/argocd-example.git
    targetRevision: HEAD
    path: helm-chart
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
The key is specifying the per-environment values file in source.helm.valueFiles. For the prod Application, just switch to values-prod.yaml.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-demo-helm-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/argocd-example.git
    targetRevision: HEAD
    path: helm-chart
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
You can also do this via CLI. Use --helm-set to override individual values, or --values to specify a file.
argocd app create nginx-demo-helm-dev \
  --repo https://github.com/<your-username>/argocd-example.git \
  --path helm-chart \
  --values values-dev.yaml \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace dev
Kustomize vs Helm — When to Use Which
Based on experience with both, here are some guidelines:
Kustomize works well when:
- Manifests are already written as plain YAML
- Differences between environments are small — just replica counts, labels, and resource limits
- You want to handle things with kubectl alone without external dependencies
Helm works well when:
- Complex conditional logic is needed (if/else, range)
- Charts need to be packaged for reuse or distribution
- Dozens of settings need to be managed through a single values file
In practice, it’s common to mix both. You might deploy external Helm charts via ArgoCD while managing internal services with Kustomize. Since ArgoCD supports both, just pick the right tool for each project.
Per-Environment Deployment Strategy
Finally, let me introduce a pattern for efficiently managing per-environment Applications. Using ArgoCD’s ApplicationSet feature, you can create Applications for multiple environments from a single template instead of writing separate Application YAMLs for each.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nginx-demo-set
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
            namespace: dev
            replicas: "1"
          - env: prod
            namespace: prod
            replicas: "3"
  template:
    metadata:
      name: "nginx-demo-{{env}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/<your-username>/argocd-example.git
        targetRevision: HEAD
        path: "k8s/overlays/{{env}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{namespace}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
Define the environment list in generators and reference it with {{env}} in the template. Applying this single ApplicationSet automatically creates two Applications: nginx-demo-dev and nginx-demo-prod. Adding a new environment is just one more entry in elements, so the pattern scales cleanly.
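The list generator's behavior is simple enough to sketch: render the template once per element, substituting each {{param}} placeholder. This is a conceptual model, not ArgoCD's actual controller code, and the `render` helper and field names are illustrative.

```python
# Sketch of the ApplicationSet list generator: for each element in the
# list, render the template by substituting {{param}} placeholders.
# Conceptual model only, not ArgoCD's real implementation.
import re

elements = [
    {"env": "dev", "namespace": "dev"},
    {"env": "prod", "namespace": "prod"},
]
template = {
    "name": "nginx-demo-{{env}}",
    "path": "k8s/overlays/{{env}}",
    "namespace": "{{namespace}}",
}

def render(tmpl: dict, params: dict) -> dict:
    """Replace every {{key}} in the template's string values."""
    return {
        field: re.sub(r"\{\{(\w+)\}\}", lambda m: params[m.group(1)], value)
        for field, value in tmpl.items()
    }

apps = [render(template, e) for e in elements]
for app in apps:
    print(app["name"], "->", app["path"])
# nginx-demo-dev -> k8s/overlays/dev
# nginx-demo-prod -> k8s/overlays/prod
```

One template plus N parameter sets yields N Applications, which is why adding an environment is a one-line change to elements.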
In the next part, we’ll cover ArgoCD’s sync strategies. We’ll look at how to control the timing and order of deployments with settings like Auto Sync, Self Heal, and Sync Wave.
→ Part 5: Sync Strategies — Auto Sync, Self Heal, and Order Control