Table of contents
- From Configuration to Operations
- App of Apps Pattern
- Monorepo vs Multi-Repo Strategy
- CI and ArgoCD Integration — Image Updater
- Troubleshooting — Common Issues
- Operational Tips
From Configuration to Operations
If you’ve followed the series this far, you’ve covered most of ArgoCD’s core features: installation, Application creation, sync strategies, multi-cluster management, and access control. But knowing a tool and using it well are two different stories.
This part covers patterns and problems that repeatedly appear in real production environments. Rather than introducing new features, the focus is on how to combine what you’ve already learned and which choices to make in which situations.
App of Apps Pattern
If there are dozens of Applications to deploy to a cluster, those Application manifests themselves need management. Manually running kubectl apply every time you add an Application doesn’t align with the GitOps philosophy.
The App of Apps pattern creates an “Application that manages Applications.” A single root Application watches a directory containing other Application manifests, and ArgoCD automatically creates and manages the child Applications.
The root Application manages child Applications, and each child Application manages actual Kubernetes resources.
```mermaid
flowchart TB
    Root["Root Application\n(watches apps/ directory)"]
    ChildA["Child App: monitoring\n(Prometheus + Grafana)"]
    ChildB["Child App: ingress-nginx\n(Ingress Controller)"]
    ChildC["Child App: backend-api\n(Backend service)"]
    ResA1["Deployment: prometheus"]
    ResA2["Service: grafana"]
    ResB1["Deployment: ingress-nginx"]
    ResB2["ConfigMap: nginx-config"]
    ResC1["Deployment: api-server"]
    ResC2["Service: api-server"]
    ResC3["HPA: api-server"]
    Root -->|"Auto-create & manage"| ChildA
    Root -->|"Auto-create & manage"| ChildB
    Root -->|"Auto-create & manage"| ChildC
    ChildA --> ResA1
    ChildA --> ResA2
    ChildB --> ResB1
    ChildB --> ResB2
    ChildC --> ResC1
    ChildC --> ResC2
    ChildC --> ResC3
```
Let’s look at the directory structure first.
```
infra-repo/
├── root-app.yaml            # Only this needs manual apply
└── apps/
    ├── monitoring.yaml      # Prometheus + Grafana
    ├── cert-manager.yaml    # Certificate management
    ├── ingress-nginx.yaml   # Ingress Controller
    ├── backend-api.yaml     # Backend service
    └── frontend-web.yaml    # Frontend service
```
The root Application points to the apps/ directory as its source.
```yaml
# root-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/infra-repo.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Each file inside the apps/ directory is an Application definition.
```yaml
# apps/monitoring.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  project: infra
  source:
    repoURL: https://github.com/my-org/infra-repo.git
    targetRevision: main
    path: k8s/monitoring
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
The advantage of this structure is clear. The only thing that needs manual apply is root-app.yaml. To add a new service, just commit a YAML file to the apps/ directory. The root Application detects the change and automatically creates the new Application.
There’s a caveat though. If you set prune: true on the root Application, deleting a file from the apps/ directory will cascade-delete the corresponding Application and all resources it manages. This is the intended behavior, but since the blast radius of an accidental file deletion is large, it’s worth considering prune: false for production.
Should you use App of Apps or ApplicationSet? If the structure between Applications is similar and only parameters differ, ApplicationSet is a better fit. If each Application has different configurations or you need fine-grained control, App of Apps is more flexible. Using both together is also possible — placing ApplicationSets under the root Application.
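When the Applications differ only by parameters, the ApplicationSet alternative can be sketched like this — a minimal example with a list generator, assuming a hypothetical `my-app-deploy` repo with per-environment overlay directories (names and URLs are illustrative, not from this series):

```yaml
# Hypothetical ApplicationSet: stamps out one Application per environment.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-envs
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
          - env: prod
  template:
    metadata:
      name: "my-app-{{env}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/my-app-deploy.git
        targetRevision: main
        path: "overlays/{{env}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "my-app-{{env}}"
```

Committing this file under the `apps/` directory is exactly the "both together" combination: the root Application creates the ApplicationSet, which in turn creates the per-environment Applications.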
Monorepo vs Multi-Repo Strategy
In GitOps, “which repo structure to use” is a decision with more impact than you might think. There are broadly three strategies.
Strategy 1: App code and manifests in the same repo (monorepo)
```
my-app/
├── src/             # Application code
├── Dockerfile
└── k8s/             # Kubernetes manifests
    ├── base/
    └── overlays/
        ├── dev/
        ├── staging/
        └── prod/
```
Intuitive from a developer’s perspective: you can modify code and manifests in the same PR. However, it’s difficult for the CI pipeline to distinguish code changes from manifest changes, and ArgoCD may react to code-only commits with unnecessary refreshes. You can set the path to watch only the k8s/ directory, but the Git history is still mixed.
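Restricting ArgoCD’s attention to the manifest directory looks like this — a sketch assuming a hypothetical `my-app` monorepo:

```yaml
# Watch only the k8s/ subtree of a monorepo (repo name is illustrative).
spec:
  source:
    repoURL: https://github.com/my-org/my-app.git
    targetRevision: main
    path: k8s/overlays/prod   # commits outside this path don't change rendered manifests
```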
Strategy 2: Dedicated manifest repo (multi-repo)
```
# Repo 1: my-app (source code)
my-app/
├── src/
├── Dockerfile
└── ci/

# Repo 2: my-app-deploy (manifests)
my-app-deploy/
├── base/
└── overlays/
    ├── dev/
    ├── staging/
    └── prod/
```
Concerns are separated. CI builds images from the source repo, and CD only responds to changes in the deploy repo. Access permissions can also be separated, making it easy to restrict write access to the deploy repo to operators only. The downside is that code changes and manifest changes become separate commits, which can make tracking cumbersome.
Strategy 3: Central infra repo
```
infra-repo/
├── apps/                # App of Apps manifests
├── k8s/
│   ├── api-server/
│   ├── web-frontend/
│   └── worker/
├── base-infra/
│   ├── monitoring/
│   ├── cert-manager/
│   └── ingress/
└── applicationsets/
```
This approach consolidates all manifests in a single infra repo. You can see the entire infrastructure state at a glance, and it pairs well with the App of Apps pattern. However, as the repo grows, conflicts between teams can arise, and more granular access control becomes necessary.
There’s no right answer. For small teams, the simplicity of a monorepo is an advantage. For large teams with many services, a multi-repo + central infra repo combination may be easier to manage.
CI and ArgoCD Integration — Image Updater
In GitOps, the boundary between CI and CD is this: CI builds images and pushes them to a registry, then CD reflects the new image tag in Git and applies it to the cluster. The problem lies in the “reflecting the new image tag in Git” step.
It’s common for CI pipelines to directly modify and commit to the Git repo’s manifests, but this requires giving CI write access to Git, and the auto-commits pile up making the history messy.
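The CI-commits-directly approach described above might look like this hypothetical GitHub Actions step, assuming a `my-app-deploy` repo managed with Kustomize and a `DEPLOY_TOKEN` secret with write access (all names are illustrative):

```yaml
# Hypothetical CI step: commit the freshly built image tag to the deploy repo.
- name: Bump image tag in deploy repo
  run: |
    git clone https://x-access-token:${{ secrets.DEPLOY_TOKEN }}@github.com/my-org/my-app-deploy.git
    cd my-app-deploy/overlays/prod
    kustomize edit set image my-org/my-app=my-org/my-app:${{ github.sha }}
    git commit -am "ci: update image to ${{ github.sha }}"
    git push
```

Every build produces one such commit, which is exactly the history noise and credential handling that Image Updater is designed to take over.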
ArgoCD Image Updater solves this. It periodically checks the container registry, and when a new image tag is found, it automatically updates the ArgoCD Application.
Install Image Updater.
```shell
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/manifests/install.yaml
```
Add annotations to the Application to specify which images Image Updater should manage.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: myapp=my-org/my-app
    argocd-image-updater.argoproj.io/myapp.update-strategy: semver
    argocd-image-updater.argoproj.io/myapp.allow-tags: "regexp:^[0-9]+\\.[0-9]+\\.[0-9]+$"
    argocd-image-updater.argoproj.io/write-back-method: git
    argocd-image-updater.argoproj.io/git-branch: main
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-app-deploy.git
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
```
The update-strategy determines how to select the new image tag.
| Strategy | Behavior |
|---|---|
| `semver` | Select the highest version following Semantic Versioning rules |
| `latest` | Select the most recently built image |
| `digest` | Update when the digest changes, even for the same tag |
| `name` | Compare tag names alphabetically and select the largest value |
allow-tags restricts the tag format. The example above only allows tags in 1.2.3 format, so tags like latest or dev-abc123 are ignored.
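The combination of `allow-tags` and `semver` behaves roughly like the following sketch (the tag list is invented for illustration; this mimics the selection logic, not Image Updater’s actual code):

```python
import re

# Tags that might exist in a registry (illustrative).
tags = ["latest", "dev-abc123", "1.2.3", "1.10.0", "2.0.0-rc1"]

# allow-tags: keep only tags matching ^[0-9]+\.[0-9]+\.[0-9]+$
allowed = [t for t in tags if re.fullmatch(r"[0-9]+\.[0-9]+\.[0-9]+", t)]

# semver: pick the highest version by numeric components, not by string order
newest = max(allowed, key=lambda t: tuple(map(int, t.split("."))))
print(newest)  # 1.10.0
```

Note that a plain alphabetical comparison (the `name` strategy) would pick `1.2.3` here, since the string `"1.2.3"` sorts after `"1.10.0"` — which is exactly why `semver` exists.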
Setting write-back-method: git makes Image Updater commit the new image tag directly to the Git repo. This approach best aligns with GitOps principles, but requires write access to the repo. The alternative argocd method handles it through ArgoCD’s parameter overrides without touching Git, but the downside is that no record is left in Git.
Troubleshooting — Common Issues
When operating ArgoCD, you’ll encounter some errors repeatedly. Knowing the causes and solutions in advance helps avoid panic.
Sync Failed
Let’s start with the most common causes of sync failure.
Manifest syntax errors are the most basic case. YAML indentation mistakes or incorrect field names are usually the culprit.
```shell
argocd app get my-app
```
This command shows detailed error messages. Most of the time, the cause is contained in the error message returned by the Kubernetes API server.
Insufficient permissions are also common. Either ArgoCD’s ServiceAccount lacks permission to create the resource, or the AppProject doesn’t allow that resource type or namespace.
```shell
argocd app sync my-app --dry-run
```
Testing with dry-run first helps catch issues before actual application.
CRD dependency issues also occur frequently. This happens when trying to deploy a Custom Resource but the corresponding CRD hasn’t been installed yet. Fix this by ordering CRD deployment first with Sync Waves, or by using the SkipDryRunOnMissingResource=true option.
```yaml
syncOptions:
  - SkipDryRunOnMissingResource=true
```
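For the Sync Waves approach, give the CRD an earlier wave so it is applied before the Custom Resources that depend on it — a sketch with an invented CRD name (resources without the annotation default to wave 0):

```yaml
# Hypothetical CRD placed in wave -1, so it syncs before wave-0 Custom Resources.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
```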
OutOfSync State That Won’t Resolve
Sometimes an Application stays OutOfSync even after syncing. This is usually because Kubernetes automatically adds default values when creating resources, or controllers modify fields.
A typical example is spec.replicas on Deployments. Since HPA constantly changes the value, from ArgoCD’s perspective it always looks different.
```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
    - group: admissionregistration.k8s.io
      kind: MutatingWebhookConfiguration
      jqPathExpressions:
        - ".webhooks[]?.clientConfig.caBundle"
```
ignoreDifferences excludes specific fields from comparison. Use jsonPointers to specify exact paths, or jqPathExpressions for more complex patterns.
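To make the jsonPointers syntax concrete: a JSON Pointer like `/spec/replicas` walks one key (or list index) per `/` segment. A minimal sketch, not ArgoCD code — a full implementation would also handle RFC 6901 escaping rules like `~0`/`~1`:

```python
# Toy JSON Pointer resolver for illustration only.
def resolve(pointer, obj):
    for part in pointer.strip("/").split("/"):
        # Numeric segments index into lists; everything else is a dict key.
        obj = obj[int(part)] if isinstance(obj, list) else obj[part]
    return obj

deployment = {"spec": {"replicas": 3,
                       "template": {"spec": {"containers": [{"name": "api"}]}}}}
print(resolve("/spec/replicas", deployment))                         # 3
print(resolve("/spec/template/spec/containers/0/name", deployment))  # api
```

jqPathExpressions are needed where a pointer can’t reach, such as “the caBundle of every webhook in the list” — a single pointer cannot express `[]?` iteration.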
If there are fields to ignore system-wide rather than at the individual Application level, you can configure it in argocd-cm.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.ignoreDifferences.admissionregistration.k8s.io_MutatingWebhookConfiguration: |
    jqPathExpressions:
      - '.webhooks[]?.clientConfig.caBundle'
```
Degraded State
When an Application shows as Degraded, it means one of the child resources is unhealthy.
```shell
argocd app get my-app --show-operation
```
After identifying which resource has the problem, check the resource’s events and logs.
```shell
kubectl describe deployment my-app -n my-app
kubectl logs -l app=my-app -n my-app --tail=50
```
Common causes include image pull failures (ImagePullBackOff), resource limit exceeded (OOMKilled), and health check failures (CrashLoopBackOff).
ArgoCD has built-in health check logic per resource type, but for CRDs, you can define custom health checks.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.mycrd.example.com_MyResource: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.phase == "Ready" then
        hs.status = "Healthy"
        hs.message = "Resource is ready"
      elseif obj.status.phase == "Provisioning" then
        hs.status = "Progressing"
        hs.message = "Resource is being provisioned"
      else
        hs.status = "Degraded"
        hs.message = obj.status.message or "Unknown status"
      end
    end
    return hs
```
Written in Lua, it maps Healthy/Progressing/Degraded status to ArgoCD based on the CRD’s status field.
Operational Tips
Finally, here’s a collection of tips worth knowing during operations.
Set up notifications. Installing the ArgoCD Notifications controller lets you send sync success/failure, health status change, and other events to Slack, Teams, or email. It’s more systematic and easier to maintain than sending alerts via PostSync Hooks.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  trigger.on-sync-succeeded: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-sync-succeeded]
  template.app-sync-succeeded: |
    message: |
      Application {{.app.metadata.name}} has been successfully synced.
      Revision: {{.app.status.sync.revision}}
```
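With the trigger and template defined, each Application opts in via a subscription annotation; the channel name here is illustrative:

```yaml
# Route this Application's on-sync-succeeded events to a Slack channel
# (channel name is hypothetical).
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: deploy-alerts
```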
Set resource limits. You should set appropriate resource requests/limits for the ArgoCD server and repo-server. The repo-server in particular clones Git repos and renders manifests, so it can consume a lot of memory when repos are large or Helm charts are complex.
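As one possible starting point (the values are illustrative; tune them to your repo sizes and chart complexity), the repo-server Deployment can be patched like so:

```yaml
# Illustrative requests/limits for argocd-repo-server; adjust to your workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 2Gi
```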
Customize diffs. Sometimes you want to ignore changes to specific annotations or labels. For example, annotations like kubectl.kubernetes.io/last-applied-configuration often differ without any meaningful significance.
```yaml
spec:
  ignoreDifferences:
    - group: "*"
      kind: "*"
      managedFieldsManagers:
        - kube-controller-manager
```
Regular cleanup. As old Application histories accumulate, ArgoCD’s Redis memory usage grows. The retained history is limited per Application via spec.revisionHistoryLimit (default 10), and controller throughput and timeouts can be tuned in argocd-cmd-params-cm.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  controller.status.processors: "20"
  controller.operation.processors: "10"
  controller.repo.server.timeout.seconds: "120"
  reposerver.parallelism.limit: "0"
```
This concludes the ArgoCD series. Starting from installation and basic concepts, we covered sync strategies, multi-cluster management, access control, and real-world operational patterns — a full circle. GitOps is not a tool but a culture, and ArgoCD is merely a tool for realizing that culture. What truly matters is that the entire team embraces the principle that “Git is the only source of truth” and continuously refines workflows that align with this principle.

