ioob.dev

Kubernetes Beginner Series 3 — Pod

· 7 min read
Kubernetes Series (3/12)
  1. Kubernetes Beginner Series 1 — What Is Kubernetes
  2. Kubernetes Beginner Series 2 — Cluster Architecture
  3. Kubernetes Beginner Series 3 — Pod
  4. Kubernetes Beginner Series 4 — Controllers
  5. Kubernetes Beginner Series 5 — Services and Networking
  6. Kubernetes Beginner Series 6 — Ingress and Gateway API
  7. Kubernetes Beginner Series 7 — ConfigMap and Secret
  8. Kubernetes Beginner Series 8 — Storage: PV, PVC, StorageClass
  9. Kubernetes Beginner Series 9 — Resource Management and Autoscaling
  10. Kubernetes Beginner Series 10 — RBAC and Security: The Principle of Least Privilege
  11. Kubernetes Beginner Series 11 — Observability: Logs, Metrics, and Traces
  12. Kubernetes Beginner Series 12 — Helm and Package Management

Why Pods, of All Things

When you first touch Kubernetes, you start with kubectl run. You see a message like pod/nginx created. You were trying to start a container — so why does it say “pod”?

In Kubernetes, the unit of deployment is not the container but the Pod. A pod is a unit wrapping one or more containers, and containers within the same pod share networking and storage. In other words, containers belonging to the same pod use the same IP and can reach each other via localhost.

Why add this extra layer? At first, it seems like containers alone would suffice as the deployment unit. But in real operations, you frequently encounter container pairs that must travel closely together, like “main process + log collector” or “main process + TLS proxy.” They need to share the same network and storage, and it’s natural for them to be bundled as a deployment unit. Pods are the mechanism for grouping “inseparable pairs.”
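As a sketch, such a pair is declared as two entries under a single pod's containers field (the names and images here are illustrative, not from a real registry):

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: myapp:1.0       # main process (hypothetical image)
  - name: tls-proxy
    image: myproxy:1.0     # reaches the app over localhost (hypothetical image)
```

Both containers are scheduled onto the same node, start together, and are deleted together — the pod is the unit the scheduler and kubelet reason about.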

Inside a Pod

There’s an interesting supporting character inside a pod: the Pause container. Before the app containers declared by the user come up, a very small container called Pause starts first. Its role is to hold on to the network namespace. The reason network information persists even when app containers restart is thanks to Pause.

flowchart TB
    subgraph POD["Pod (IP: 10.244.1.5)"]
        PAUSE[Pause Container<br/>Network Namespace Holder]
        APP[App Container]
        SIDE[Sidecar Container]
        VOL[(Shared Volume)]
        APP -.-|localhost| SIDE
        APP --> VOL
        SIDE --> VOL
    end

Thanks to this structure, containers within a pod share:

- The network namespace — one IP per pod, and containers reach each other via localhost
- Volumes declared in the pod spec — a shared filesystem path for exchanging files
- The same lifecycle — they are scheduled, started, and deleted together

Each pod gets a unique IP, but when a pod dies, that IP is gone. A newly created pod receives a different IP. This instability is why we abstract pods behind Services instead of pointing to them directly (covered in Part 5).

Running the Simplest Pod

Let’s start by creating a YAML for a pod with a single nginx container.

# nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80

Apply this file to the cluster and check the status:

kubectl apply -f nginx-pod.yaml
kubectl get pod nginx -o wide

When the STATUS changes to Running, the container is up and running. You can also exec into the pod and curl it from the inside:

kubectl exec -it nginx -- curl localhost:80

If you see the “Welcome to nginx” page, it’s a success. However, in practice, directly creating pods like this is rare. A standalone pod has no self-healing — if it dies, it simply disappears. That’s why we use controllers like Deployment, covered in Part 4, to manage pods.

Pod Lifecycle

Knowing the stages a pod goes through from birth to death is a huge help when debugging.

stateDiagram-v2
    [*] --> Pending
    Pending --> Running: Container started
    Pending --> Failed: Image pull failure
    Running --> Succeeded: Normal termination (RestartPolicy=Never)
    Running --> Failed: Abnormal termination (cannot restart)
    Running --> Running: Container restart
    Running --> [*]: Deleted
    Succeeded --> [*]: Deleted
    Failed --> [*]: Deleted

Running kubectl describe pod <name> shows lifecycle transitions in chronological order in the Events section. Familiar error messages like “ImagePullBackOff” and “CrashLoopBackOff” appear here.
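These transitions also depend on the pod's restart policy. A minimal fragment — the field sits at the pod level, next to containers (the image is hypothetical):

```
spec:
  restartPolicy: OnFailure   # Always (default) | OnFailure | Never
  containers:
  - name: app
    image: myapp:1.0
```

With Never, a container that exits cleanly takes the pod to Succeeded; with Always, the kubelet keeps restarting it regardless of exit code.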

Health Checks: Liveness, Readiness, Startup

Just because a pod is Running doesn’t necessarily mean it’s working correctly. The container might be alive but the application could be stuck in a deadlock, or it might still be booting and not ready to receive traffic. Kubernetes uses three probes to distinguish these states.

Let’s look at the flow of when each probe acts and what decisions it makes:

flowchart TB
    START["Container Start"] --> STP{"startupProbe exists?"}
    STP -->|Yes| SLOOP{"startupProbe passed?"}
    SLOOP -->|"Failed (below failureThreshold)"| SLOOP
    SLOOP -->|Passed| READY["liveness/readiness activated"]
    SLOOP -->|"Threshold exceeded"| KILL["Restart container"]
    STP -->|No| READY
    READY --> LP{"livenessProbe failed?"}
    LP -->|Yes| KILL
    LP -->|No| RP{"readinessProbe failed?"}
    RP -->|Yes| REMOVE["Remove from Service Endpoints"]
    RP -->|No| SERVE["Normal traffic reception"]
In YAML, all three probes are configured like this:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: myapp:1.0
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10

Here’s what each one does:

- livenessProbe — “Is this container alive?” On failure, the kubelet restarts the container
- readinessProbe — “Can it receive traffic right now?” On failure, the pod is removed from Service endpoints; the container is not restarted
- startupProbe — “Has it finished booting?” Until it passes, the other two probes are held off, which protects slow-starting apps from being killed mid-boot

If readiness isn’t configured properly, a newly started pod receives traffic before it’s ready and starts throwing 500 errors. This is a common cause of deployment failures.
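httpGet is not the only probe mechanism — probes can also run a command inside the container or open a TCP connection. A sketch with illustrative paths and ports:

```
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]   # passes if the command exits 0
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 8080                       # passes if the TCP connection succeeds
      periodSeconds: 10
```

exec probes suit apps without an HTTP endpoint; tcpSocket is a cheap liveness signal when merely accepting connections is a good enough proxy for health.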

Multi-Container Pattern: Sidecar

Putting multiple containers in a single pod is for special cases. The most common pattern is the Sidecar. It’s a structure where a helper container is attached alongside the main container.

Common sidecar use cases include:

- Log collection — reading the main container’s log files and forwarding them to an external collector
- Proxying — terminating TLS or handling service-mesh traffic in front of the main container
- Config synchronization — watching for configuration changes and making them available to the main container

Let’s look at a sidecar example that shares file-based logs:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true

Both containers mount the emptyDir volume together. The main app writes logs to /var/log/app, and the log-forwarding sidecar reads from the same path and ships them externally. Since they’re in the same pod, communication happens purely through the filesystem with no network hops.
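Since Kubernetes 1.28, there is also native sidecar support: declaring the helper as an init container with restartPolicy: Always keeps it running alongside the main container for the pod’s whole lifetime and shuts it down after the main container. A sketch of the same log forwarder in that style:

```
spec:
  volumes:
  - name: logs
    emptyDir: {}
  initContainers:
  - name: log-forwarder
    image: fluent/fluent-bit:latest
    restartPolicy: Always        # marks this init container as a native sidecar
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  containers:
  - name: app
    image: myapp:1.0             # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

The practical gain over a plain second container: the sidecar is guaranteed to be up before the app starts and to outlive it during shutdown, so no log lines are lost at either edge.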

Init Container: Containers That Run First and Finish

When there are initialization tasks that must complete before the main container starts, use Init Containers. Init containers run sequentially, and only after all of them succeed does the main container start.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db 5432; do echo "waiting db"; sleep 2; done']
  - name: run-migration
    image: myapp:1.0
    command: ['./migrate.sh']
  containers:
  - name: app
    image: myapp:1.0

This pod operates in the following order:

  1. wait-for-db waits until the DB becomes accessible
  2. run-migration runs the DB schema migration
  3. The main container app starts

Visualized on a timeline, it looks like this. The key is sequential execution where each stage must succeed before moving to the next:

sequenceDiagram
    participant K as kubelet
    participant I1 as initContainer:wait-for-db
    participant I2 as initContainer:run-migration
    participant M as container:app

    K->>I1: Run
    I1->>I1: Poll DB connection
    I1-->>K: exit 0 (success)
    K->>I2: Run
    I2->>I2: Migration
    I2-->>K: exit 0 (success)
    K->>M: Run (main)
    Note over M: Pod enters Running state

You could put this logic inside the main app instead of using init containers. But separating it makes responsibilities clear and keeps images small. Most importantly, if the migration fails, the main app never starts — preventing the accident of serving traffic in an invalid state.
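Init containers can also hand results to the main container through a shared volume, since volumes outlive the init phase. A sketch under hypothetical paths and images:

```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fetched-config
spec:
  volumes:
  - name: config
    emptyDir: {}
  initContainers:
  - name: fetch-config
    image: busybox:1.36
    # write a file the main container will read (placeholder logic)
    command: ['sh', '-c', 'echo "mode=prod" > /config/app.conf']
    volumeMounts:
    - name: config
      mountPath: /config
  containers:
  - name: app
    image: myapp:1.0             # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /config
```

Because the init container has already exited by the time app starts, the file is guaranteed to exist — no polling or race handling needed in the main process.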

Resource Requests and Limits

When running a pod, it’s good practice to specify “how much CPU this pod uses and what the maximum memory should be.” This is configured with resources.requests and resources.limits.

spec:
  containers:
  - name: app
    image: myapp:1.0
    resources:
      requests:
        cpu: "200m"      # 0.2 CPU cores
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"

requests is the value the Scheduler references when placing a pod. It sets the criterion: “this node must have at least 200m of free CPU for this pod to fit.” limits is the runtime ceiling. If the memory limit is exceeded, the container gets OOMKilled.

This setting is the most common source of production incidents. If requests are too large, resources are wasted; too small, and pods contend with each other. If limits are too tight, the app falls into a restart loop from OOMKilled. You need to measure actual load and adjust accordingly.
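These two values also determine the pod’s QoS class, which influences eviction order under node memory pressure: Guaranteed (requests equal limits for every container), Burstable (requests set but below limits), and BestEffort (neither set). For example, making requests and limits identical yields Guaranteed, the last class to be evicted:

```
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "500m"      # requests == limits for every resource → QoS class Guaranteed
        memory: "512Mi"
```

You can verify the assigned class under status.qosClass in kubectl describe pod output.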

Pods Are Disposable

To close this part, here’s the most important perspective: pods are ephemeral. When they die and come back, they’re different pods with different IPs, and local disk contents are gone.

This characteristic isn’t a flaw — it’s intentional design. The premise that pods can easily die and easily be reborn is what makes self-healing, horizontal scaling, and rolling updates possible. If your application can’t accept this premise (e.g., it stores important state on local disk), you won’t be able to fully benefit from Kubernetes.

That’s why the Kubernetes-friendly approach is to push state to external systems (databases, object storage, persistent volumes) and design pods as stateless processing units.


In the next part, we’ll look at how to manage multiple pods declaratively through controllers like Deployments, rather than managing pods directly. We’ll also cover how rolling updates and rollbacks work safely.

-> Part 4: Controllers



