Table of contents
- When LoadBalancer Services Fall Short
- The Two Parts of Ingress
- Installing nginx Ingress Controller
- Basic Routing: Path and Host
- TLS Termination: Handling Certificates at Ingress
- Limitations of Ingress
- Gateway API: The Successor to Ingress
- Ingress vs Gateway API — When to Use Which?
When LoadBalancer Services Fall Short
In Part 5, we mentioned that you can expose services externally using the LoadBalancer type. But when you actually run a service in production, you quickly hit the limits of this approach.
Consider what happens as services multiply: a frontend, a backend API, an admin panel, an image server. If you expose each one as a separate LoadBalancer Service, a cloud LB (Load Balancer — a device that distributes incoming traffic across multiple servers) gets created per service. Each LB costs a fair amount, and you have to manage domains and TLS (Transport Layer Security — the protocol that encrypts communication behind HTTPS; the successor to SSL) certificates separately for each one.
On top of that, there’s almost always a need to route by path — like “/api goes to the backend, /static goes to the image server” — or by hostname — like “api.example.com is the backend, admin.example.com is the admin server.” This can’t be solved with an L4 LB (OSI Layer 4 — it only sees TCP/UDP ports for routing). You need L7 (OSI Layer 7 — the application level, where HTTP headers and paths are interpreted) routing that inspects HTTP headers.
Kubernetes’ Ingress is the abstraction that solves this problem. A single entry point routes to multiple services, terminates TLS, and applies path rules.
The Two Parts of Ingress
To understand Ingress, you first need to know that two separate parts work together:
- Ingress Resource: YAML that declares routing rules like “send this domain’s this path to that Service”
- Ingress Controller: The reverse proxy that actually interprets and applies those rules. Various implementations exist including nginx, Traefik, HAProxy, etc.
flowchart TB
EXT[External User] -->|HTTP/HTTPS| LB[Cloud LB]
LB --> IC[Ingress Controller Pod<br/>nginx / traefik]
IC --> S1[Service A]
IC --> S2[Service B]
IC --> S3[Service C]
S1 --> P1[Pods A]
S2 --> P2[Pods B]
S3 --> P3[Pods C]
IR[(Ingress Resource<br/>Routing Rules)] -.->|watch| IC
Creating an Ingress resource alone won’t process any traffic. You must install an Ingress Controller for the resource to take effect. This is the single most common point of confusion when learning Ingress for the first time.
Here’s a comparison of major Ingress Controllers:
- nginx Ingress Controller: Community reference. Extensive configuration options and plenty of documentation
- Traefik: Concise configuration and dynamic reload are strengths. Built-in dashboard
- HAProxy Ingress: Strong in performance tuning
- AWS Load Balancer Controller, GCE Ingress: Use the cloud provider’s LB directly as the Ingress data plane
Regardless of which one you choose, the core Ingress resource YAML looks the same: the standard spec is what makes controllers interchangeable.
Installing nginx Ingress Controller
Here’s an example of installing the nginx Ingress Controller with Helm for lab purposes:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
After installation, a controller pod appears in the ingress-nginx namespace, and a LoadBalancer type Service is automatically created in front of it. In a cloud environment, an external IP gets assigned.
kubectl get svc -n ingress-nginx
# NAME                       TYPE           EXTERNAL-IP
# ingress-nginx-controller   LoadBalancer   35.xxx.xxx.xxx
Once you point DNS to this external IP, you can manage all domain-based routing purely through Ingress resources from that point on. The only expensive cloud LB you need is the one in front of the Ingress Controller.
Basic Routing: Path and Host
Let’s start with a simple Ingress resource. The rule sends /api to the backend and / to the frontend:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Requests to example.com are routed by path. /api/... goes to the backend Service, and everything else goes to the frontend Service. The ingressClassName: nginx determines which controller handles this rule when multiple Ingress Controllers coexist.
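For reference, the nginx class name works because the Helm installation registers an IngressClass resource under it. A minimal sketch of what that resource looks like (the controller string below is the one used by the ingress-nginx project):

```yaml
# IngressClass registered by the ingress-nginx installation (sketch)
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
```

An Ingress whose ingressClassName matches this name is picked up by that controller; other controllers ignore it.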
Host-based routing uses the same structure:
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
  - host: admin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin
            port:
              number: 80
Point api.example.com and admin.example.com at the same IP via DNS, and the Ingress Controller inspects the HTTP Host header to route each request to the matching Service. Serving multiple domains behind a single LB becomes this simple.
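For completeness, each name referenced in the rules must exist as a Service in the same namespace. A minimal sketch of the backend Service (the pod label and container port are assumptions):

```yaml
# Minimal Service the Ingress rules above can target (label/port assumed)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend      # assumed pod label
  ports:
  - port: 80          # port the Ingress backend references
    targetPort: 8080  # assumed container port
```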
TLS Termination: Handling Certificates at Ingress
The most tedious part of adopting HTTPS is certificate management. Ingress natively supports TLS termination: requests arriving via HTTPS from the outside are decrypted at the Ingress Controller and then forwarded to internal Services over plain HTTP.
First, store the certificate as a Secret:
kubectl create secret tls example-tls \
  --cert=tls.crt \
  --key=tls.key
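If you don’t have a certificate handy for the lab, the tls.crt and tls.key files above can be a self-signed pair. A sketch using openssl (the CN example.com is an assumption; real browsers will also want a SAN, so use this only for testing):

```shell
# Generate a self-signed key/cert pair for lab use (CN assumed: example.com)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=example.com"
```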
Then reference this Secret in the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Now requests to https://example.com are automatically decrypted and processed. Since manually issuing and renewing certificates is tedious, most teams use cert-manager alongside Ingress. cert-manager automatically obtains certificates from Let’s Encrypt or a private CA, stores them in Secrets, and auto-renews before expiration. A single Ingress annotation is all it takes for “automatically issue a certificate for this domain.”
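As a sketch of what that annotation looks like, assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists (the issuer name is an assumption):

```yaml
# Ingress with cert-manager auto-issuance (issuer name is an assumption)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # cert-manager creates and renews this Secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
```

With this in place you never touch tls.crt or tls.key by hand; cert-manager populates the referenced Secret itself.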
Limitations of Ingress
Ingress is a great abstraction, but over time several limitations have emerged.
First, the spec is too heavily skewed toward HTTP/HTTPS. gRPC, TCP, and UDP are difficult to handle properly with the standard Ingress spec. You end up relying on controller-specific annotations.
Second, the dependency on annotations is too high. Advanced features like rate limiting, authentication, and canary deployments are almost entirely implemented via annotations. Since annotation systems differ between controllers, portability suffers. You can’t directly migrate nginx settings to Traefik.
Third, role separation is difficult. The infra team wants to manage the LB and certificates while app teams only manage routing rules, but everything is mixed into a single Ingress resource, making clean permission separation hard.
Gateway API emerged to solve these limitations.
Gateway API: The Successor to Ingress
Gateway API is the next-generation networking API led by Kubernetes SIG-Network. It was designed from scratch to structurally solve problems that Ingress couldn’t.
The core change is role separation. Resources are split into three, allowing different owners to manage each:
flowchart TB
GC[GatewayClass<br/>What infra is it?<br/>e.g. ingress-nginx] -.-|Infra team| ADMIN[Platform Admin]
GW[Gateway<br/>Entry point<br/>e.g. api.example.com:443] -.-|Cluster operator| OPS[Cluster Operator]
HR[HTTPRoute<br/>Routing rules<br/>e.g. /users -> user-svc] -.-|App developer| DEV[App Developer]
GC --> GW
GW --> HR
- GatewayClass: Defines which implementation handles this Gateway. Implementations like nginx, Envoy, Kong
- Gateway: The actual entry point configuration. Listeners (ports, protocols), TLS certificates
- HTTPRoute / TCPRoute / GRPCRoute: Routing rules. Attach to a Gateway and define traffic handling
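Before any route can attach, a Gateway has to exist. A minimal sketch (the class name and the namespace policy are assumptions; your controller’s documentation gives the exact GatewayClass name it installs):

```yaml
# Gateway that HTTPRoutes can attach to (sketch; names assumed)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: nginx   # must match an installed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All   # let app teams attach routes from their own namespaces
```

The allowedRoutes field is where the role separation becomes concrete: the Gateway’s owner decides which namespaces may attach routes at all.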
Let’s look at a simple HTTPRoute example:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: backend
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: frontend
      port: 80
It looks similar to Ingress but there are differences. parentRefs explicitly specifies which Gateway to attach to. Multiple HTTPRoutes from different namespaces can attach to a single Gateway, making team responsibility separation natural. The infra team sets up one Gateway, and each app team manages just their HTTPRoutes in their own namespace.
Another difference is the ability to express advanced features through standard fields without annotations. Header modification, request redirects, and weight-based traffic splitting (canary/blue-green) are all part of the spec.
# Weight-based canary deployment example
rules:
- backendRefs:
  - name: app-v1
    port: 80
    weight: 90
  - name: app-v2
    port: 80
    weight: 10
The fact that this is part of the standard alone is an escape from the “per-controller annotation hell” of the Ingress era.
Let’s visualize how weight-based canary actually distributes traffic. Out of 100 requests, how many go to each side is determined by the weight value:
flowchart LR
CLIENT["Client Requests: 100"] --> GW["Gateway (api.example.com)"]
GW --> HR["HTTPRoute"]
HR -->|"weight: 90"| V1["Service: app-v1<br/>-> 90 requests"]
HR -->|"weight: 10"| V2["Service: app-v2 (canary)<br/>-> 10 requests"]
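Header modification, mentioned earlier, is likewise a standard field rather than an annotation. A sketch of an HTTPRoute rule using the RequestHeaderModifier filter (the header name is an assumption for illustration):

```yaml
# HTTPRoute rule with a standard RequestHeaderModifier filter (sketch)
rules:
- filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      add:
      - name: X-Canary   # assumed header name
        value: "true"
  backendRefs:
  - name: backend
    port: 80
```

Because the filter is part of the spec, this exact YAML works under any conformant Gateway API implementation.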
Ingress vs Gateway API — When to Use Which?
Let’s summarize the differences in a table:
| Aspect | Ingress | Gateway API |
|---|---|---|
| Stability | GA, widely used | v1 GA (2023) |
| Role separation | Everything in one resource | GatewayClass / Gateway / Route separated |
| Protocols | Primarily HTTP/HTTPS | HTTP, TCP, UDP, gRPC, TLS |
| Advanced features | Annotation-dependent | Standard fields |
| Ecosystem maturity | Rich tools and examples | Rapidly growing |
As of 2026, most projects are still using Ingress. There’s no need to abandon Ingress immediately, but for new projects or environments requiring team-level separation, Gateway API is worth considering. Major controllers (nginx, Traefik, Istio, Envoy Gateway) all provide Gateway API implementations, so there are plenty of choices.
So far we’ve surveyed Kubernetes’ core resources and networking. You should now have a picture of where containers live and how external traffic reaches them.
Starting from the next part, we move into operational topics. Part 7 covers ConfigMaps and Secrets — separating application configuration from code.