Table of contents
- Why Separate Configuration
- ConfigMap Basics
- Injecting a ConfigMap into a Pod
- Secret — The Sensitive Data Version of ConfigMap
- Injecting Secrets into a Pod
- Are Secrets Really Secure?
- External Secret Management
- Lab — Using ConfigMap + Secret Together
Why Separate Configuration
When you deploy applications as containers, one question keeps coming up: Where should DB connection strings or API keys go?
Hardcoding them directly in the code is convenient at first, but as values start to differ across dev, staging, and production it quickly becomes a headache. Building a separate image per environment is wasteful, and redeploying everything just to change a single value makes no sense. Most critically, baking passwords or other sensitive information into image layers is a serious security problem.
To solve this, Kubernetes provides two dedicated resources. ConfigMaps hold ordinary configuration, and Secrets hold sensitive data. Both can be injected into pods as environment variables or files, so you can keep the image unchanged and just swap out the configuration.
flowchart LR
    A[Container Image] -->|No changes| D[Pod]
    B[ConfigMap<br/>General config] -->|Inject| D
    C[Secret<br/>Sensitive data] -->|Inject| D
    D -->|Different values per env| E[Dev/Stage/Prod]
Build the image once, swap only the configuration per environment. This is the Kubernetes version of “Store config in the environment” from the 12 Factor App (a collection of 12 principles for designing cloud-native apps).
ConfigMap Basics
A ConfigMap is a configuration store consisting of key-value pairs. Just list string values under the data block in YAML.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: default
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
  MAX_CONNECTIONS: "100"
  application.yaml: |
    server:
      port: 8080
    spring:
      datasource:
        url: jdbc:mysql://db-service:3306/myapp
The example above mixes two usage patterns. Simple key-value pairs like LOG_LEVEL can be used directly as environment variables, while a multi-line string like application.yaml holds an entire file as a single entry. When embedding a whole file, declare a block scalar with |.
You can verify the created ConfigMap with kubectl:
kubectl apply -f app-config.yaml
kubectl get configmap app-config -o yaml
kubectl describe configmap app-config
You can also create one via command line without YAML. Convenient for labs or ad-hoc work:
# Create from literal values
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=APP_MODE=production

# Create from a file
kubectl create configmap app-config \
  --from-file=application.yaml

# Create from an entire directory (each file becomes an entry)
kubectl create configmap app-config \
  --from-file=./config/
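To make the directory form concrete, here is a small Python sketch (the function name is illustrative, not a kubectl internal) of what --from-file with a directory does: each regular file becomes one entry, keyed by its filename.

```python
from pathlib import Path

def configmap_data_from_dir(path: str) -> dict:
    """Mimic `kubectl create configmap --from-file=<dir>`:
    each regular file becomes one entry, keyed by its filename."""
    return {
        p.name: p.read_text()
        for p in sorted(Path(path).iterdir())
        if p.is_file()
    }

# A ./config/ directory containing application.yaml and logging.conf
# would yield {"application.yaml": "...", "logging.conf": "..."}
```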
Injecting a ConfigMap into a Pod
A created ConfigMap connects to a pod in two ways: as environment variables, or mounted as a volume.
The difference between the two approaches is illustrated below. The biggest distinction is that environment variables are fixed at pod startup, while volumes allow real-time updates through the filesystem:
flowchart LR
    CM["ConfigMap: app-config\ndata:\n LOG_LEVEL: info\n app.yaml: ..."]
    subgraph ENV["Method 1: Env Var Injection"]
        E_POD["Pod Env Var\nLOG_LEVEL=info"]
        E_APP["App: os.getenv()"]
    end
    subgraph VOL["Method 2: Volume Mount"]
        V_FILE["/etc/config/app.yaml\n(Auto-updated on ConfigMap change)"]
        V_APP["App: file read + Watch"]
    end
    CM -->|"Injected at Pod start\n(Restart needed for changes)"| E_POD --> E_APP
    CM -->|"Symlink updated\n(Auto-reflected on change)"| V_FILE --> V_APP
Injecting as Environment Variables
The most intuitive approach. Reference the ConfigMap in the container spec’s env or envFrom:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      env:
        # Pick specific keys to inject
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      envFrom:
        # Inject all keys from the ConfigMap at once
        - configMapRef:
            name: app-config
env specifies one at a time, while envFrom unpacks all ConfigMap keys as environment variables at once. When there are many keys, envFrom is convenient, but the downside is you can’t tell what environment variables are injected just by looking at the manifest. Choose based on the situation.
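On the application side, both forms arrive as ordinary environment variables. A minimal Python sketch of reading the keys from the ConfigMap above (the fallback defaults here are illustrative assumptions, not something Kubernetes provides):

```python
import os

def load_settings() -> dict:
    """Read ConfigMap-injected environment variables.
    Env var values are always strings, so cast numerics explicitly."""
    return {
        "log_level": os.getenv("LOG_LEVEL", "info"),
        "app_mode": os.getenv("APP_MODE", "development"),
        "max_connections": int(os.getenv("MAX_CONNECTIONS", "100")),
    }
```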
Mounting as a Volume
Used when you want to mount an entire configuration file inside the container. Especially suited for file-based configs like Spring Boot’s application.yaml or Nginx’s nginx.conf.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: app-config
With this setup, each ConfigMap key appears as a file under /etc/config/. The application.yaml key becomes the file /etc/config/application.yaml.
There’s a hidden advantage to the volume approach: when you modify the ConfigMap, the mounted files are updated automatically. The environment variable approach requires a pod restart for changes to take effect, but volume-mounted files refresh after a short delay (tied to the kubelet's sync period). Two caveats: files mounted via subPath are never updated, and the refresh is only meaningful if the application can detect file changes and reload.
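A minimal sketch of such a reload in Python, assuming the app simply polls the mounted file's modification time (a simple alternative to inotify-based watching; paths and names are illustrative):

```python
import os
from pathlib import Path

def check_reload(path: str, last_mtime: float, on_change) -> float:
    """One polling step: if the file's mtime moved since last_mtime,
    pass the new contents to on_change. Returns the current mtime.
    (The kubelet updates mounted ConfigMap files via a symlink swap,
    which changes the visible file's mtime.)"""
    current = os.stat(path).st_mtime
    if current != last_mtime:
        on_change(Path(path).read_text())
    return current

# In the app, run the step in a loop or a background thread, e.g.:
#   mtime = 0.0
#   while True:
#       mtime = check_reload("/etc/config/application.yaml", mtime, apply_config)
#       time.sleep(2)
```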
Secret — The Sensitive Data Version of ConfigMap
On the surface, a Secret looks almost identical to a ConfigMap. The differences are that values are stored base64-encoded and the API server treats them as sensitive data.
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=      # base64-encoded "admin"
  password: UEBzc3cwcmQh  # base64-encoded "P@ssw0rd!"
base64 is encoding, not encryption. Running echo "YWRtaW4=" | base64 -d gives you the original value. That’s why writing Secrets directly in YAML and committing them to Git is extremely dangerous.
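The same round trip in Python's standard base64 module, using the values from the Secret above:

```python
import base64

# Decoding is trivial -- anyone who can read the manifest has the values:
print(base64.b64decode("YWRtaW4=").decode())      # admin
print(base64.b64decode("UEBzc3cwcmQh").decode())  # P@ssw0rd!

# Encoding a value for the data block is just as easy:
print(base64.b64encode(b"admin").decode())        # YWRtaW4=
```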
If you want to write in plaintext, use stringData. Kubernetes handles the base64 conversion automatically:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  username: admin
  password: "P@ssw0rd!"
Creating via command line automatically handles encoding:
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password='P@ssw0rd!'
Secret Types
Secrets have a type field. There are predefined types for different purposes, allowing Kubernetes to validate the value structure:
| Type | Purpose |
|---|---|
| Opaque | General key-value (default) |
| kubernetes.io/tls | TLS certificate and key (tls.crt, tls.key) |
| kubernetes.io/dockerconfigjson | Private image registry credentials |
| kubernetes.io/service-account-token | ServiceAccount token (auto-generated) |
For example, when storing a TLS certificate:
kubectl create secret tls my-tls \
  --cert=./server.crt \
  --key=./server.key
This is the Secret type referenced when using HTTPS with Ingress.
Injecting Secrets into a Pod
The injection methods are nearly identical to ConfigMaps. Use secretRef instead of configMapRef, and secretKeyRef instead of configMapKeyRef:
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: db-secret
When mounted as a volume, each key appears as a file. However, Secret volumes are stored by default on tmpfs (in-memory filesystem). Since they’re not written to the node disk, they’re a bit safer.
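Reading such a mounted Secret from application code is plain file I/O. A small Python sketch (the helper name is illustrative; the default mount path matches the example above):

```python
from pathlib import Path

def read_secret(name: str, secret_dir: str = "/etc/secrets") -> str:
    """Read one key of a volume-mounted Secret. The kubelet writes the
    already-decoded bytes to the file, so no base64 handling is needed;
    only a trailing newline (if any) is trimmed."""
    return (Path(secret_dir) / name).read_text().rstrip("\n")

# In the pod from the example above:
# db_password = read_secret("password")
```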
Are Secrets Really Secure?
The security of Secrets depends on how many layers of security you stack. Let’s look at what threats each layer addresses:
flowchart TB
    L0["Default Secret\n(base64 encoding only)"] --> T0["Warning: Exposed on etcd dump\nWarning: Exposed if pushed to Git"]
    L0 --> L1["+ etcd encryption\n(EncryptionConfiguration)"]
    L1 --> T1["OK: Defends against etcd dump\nWarning: Cluster admin can still access"]
    L1 --> L2["+ RBAC least privilege\n(restrict secrets get/list)"]
    L2 --> T2["OK: Blocks general user access\nWarning: Anyone who can create Pods can read via env vars"]
    L2 --> L3["+ External secret store\n(Vault, AWS SM)"]
    L3 --> T3["OK: Centralized audit logs\nOK: Dynamic secrets / key rotation\nOK: Actual values not in Git"]
There’s a point to be honest about here. Default Secrets aren’t as secure as you might think.
- By default, values are stored in plaintext in etcd. You must explicitly enable encryption
- base64 is not encryption
- Anyone with cluster admin privileges can see all Secrets
- Anyone who can create Pods can inject Secrets as environment variables and read them
Kubernetes provides etcd-level encryption through EncryptionConfiguration. By configuring an encryption provider when starting kube-apiserver, encrypted values are stored in etcd:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte-base64-key>
      - identity: {}
Here aescbc is the actual encryption provider, and identity is plaintext storage (fallback). The order of providers matters: the top one is used for writes, and reads are attempted top-down.
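The <32-byte-base64-key> placeholder has to be generated yourself. A quick sketch using Python's standard secrets module:

```python
import base64
import secrets

# 32 random bytes, base64-encoded as EncryptionConfiguration expects
key = base64.b64encode(secrets.token_bytes(32)).decode()
print(key)  # paste this into the `secret:` field of the aescbc provider
```

The equivalent one-liner with common CLI tools is `head -c 32 /dev/urandom | base64`.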
External Secret Management
etcd encryption is the necessary minimum, but real production often demands more. You might need audit logs for secrets, periodic key rotation, or external systems (e.g., RDS) that reference the same values.
This is where external secret store integration comes in. Several common combinations exist:
- HashiCorp Vault: The most widely used secret manager. Dynamic secrets and lease-based rotation are strengths
- AWS Secrets Manager / Parameter Store: IAM-based access control in AWS environments
- GCP Secret Manager / Azure Key Vault: Managed secret services from each cloud provider
The de facto standard for connecting these to Kubernetes is the External Secrets Operator. You declare an ExternalSecret CRD saying “sync this external secret to a Secret in this namespace,” and the operator periodically queries the external store and creates the Kubernetes Secret:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: backend
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: db-secret  # The Kubernetes Secret will be created with this name
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/prod/db
        property: password
The value of this approach is that you don’t need to put secrets in Git. What goes into Git is just a reference saying “fetch from this path in Vault,” and the actual value only exists in Vault. This solves both GitOps and secret management simultaneously.
flowchart LR
    A[Git Repo<br/>ExternalSecret definition] -->|Deploy| B[Kubernetes]
    B --> C[External Secrets Operator]
    C -->|Query| D[Vault / AWS SM]
    D -->|Return value| C
    C -->|Create Secret| E[Inject into Pod]
Lab — Using ConfigMap + Secret Together
Let’s tie everything we’ve learned together. We’ll inject a custom config into Nginx via ConfigMap and inject a Basic Auth password via Secret.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
        listen 80;
        location / {
            auth_basic "Restricted";
            auth_basic_user_file /etc/nginx/auth/htpasswd;
            # Serve static files; a `return` here would answer during the
            # rewrite phase and bypass auth_basic entirely
            root /usr/share/nginx/html;
            index index.html;
        }
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: nginx-auth
type: Opaque
stringData:
  # Value generated with: htpasswd -nb admin secret
  htpasswd: "admin:$apr1$abc123$xyz..."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
            - name: auth
              mountPath: /etc/nginx/auth
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: nginx-config
        - name: auth
          secret:
            secretName: nginx-auth
Apply and verify:
kubectl apply -f nginx-demo.yaml
kubectl port-forward deploy/nginx 8080:80

# In another terminal
curl http://localhost:8080/                  # 401 Unauthorized without credentials
curl -u admin:secret http://localhost:8080/  # 200 with the right password
The key point is that configuration and credentials are completely separated, so the image uses the official nginx:1.27 as-is. When the environment changes, the image stays the same — you just swap the ConfigMap and Secret.
In the next part, we’ll cover the storage system that keeps data alive even when pods restart. We’ll look at how PVs and PVCs connect and how dynamic provisioning works with StorageClass.



