
Docker for Beginners Part 6 — Networking

· 8 min read
Docker Series (6/13)
  1. Docker for Beginners Part 1 — What Is Docker
  2. Docker for Beginners Part 2 — Images and Layers
  3. Docker for Beginners Part 3 — Writing a Dockerfile
  4. Docker for Beginners Part 4 — Container Lifecycle
  5. Docker for Beginners Part 5 — Volumes and Data Persistence
  6. Docker for Beginners Part 6 — Networking
  7. Docker Part 7 — Multi-Container Orchestration with Docker Compose
  8. Docker Part 8 — Slimming Images with Multi-Stage Builds
  9. Docker Part 9 — Registry: Where Do Images Live?
  10. Docker Part 10 — Container Security: Blocking Issues Before They Blow Up
  11. Docker Part 11 — BuildKit and Advanced Builds
  12. Docker Part 12 — Production Best Practices
  13. Docker Part 13 — Troubleshooting and Alternatives
How Do Containers Talk to Each Other?

When you run only a single container, you rarely need to think about networking. A single docker run -p 8080:80 nginx line and you can access it at port 8080 on the host. But once you have two, three, or more containers — with the DB, app, and cache each in separate containers — questions suddenly flood in: how does the app find the DB's address? Why does connecting to localhost fail? Which ports should be reachable from the outside, and which should not?

This part answers those questions. Docker networking is more structured than you might think, and once you grasp that structure, most networking problems can be located on a single map.

The Big Picture of Docker Networking

Let’s look at the overall structure in a single diagram:

flowchart TB
    subgraph HOST["Docker Host"]
        subgraph BR["bridge network (docker0)"]
            C1["Container A<br/>172.17.0.2"]
            C2["Container B<br/>172.17.0.3"]
        end
        subgraph USER["user-defined bridge (my-net)"]
            C3["Container C<br/>172.18.0.2"]
            C4["Container D<br/>172.18.0.3"]
        end
        HOSTNIC["eth0 (host NIC)<br/>192.168.1.10"]
        IPT["iptables NAT/FILTER"]
    end
    EXT["External network"]

    BR <--> IPT
    USER <--> IPT
    IPT <--> HOSTNIC
    HOSTNIC <--> EXT

Docker creates virtual network interfaces and Linux bridges on the host, assigns each container its own network namespace, and connects them using virtual NIC pairs (veth pairs). iptables rules handle NAT and port forwarding. From the outside it looks like magic, but inside it is standard Linux networking techniques.

Four Network Drivers

Docker abstracts networking through drivers. Let’s look at the four commonly used ones:

| Driver | Scope | Use case |
|---|---|---|
| bridge | Single host | Communication between containers on the same host (default) |
| host | Single host | Container uses the host's network directly |
| overlay | Multi-host | Communication across containers on multiple nodes in Swarm |
| none | Single host | No network (for isolation or custom configuration) |

Let’s go through each one and understand why they are designed this way.

bridge — The Most Commonly Used Default Driver

When Docker is installed, three networks are created by default: bridge, host, and none. If nothing is specified in docker run, the bridge network (default name: bridge) is used.

docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# 8f1e2d3c4b5a   bridge    bridge    local
# 7c2d3e4f5a6b   host      host      local
# 6b3c4d5e6f7a   none      null      local

Containers on the default bridge can communicate with each other by IP. However, IPs can change on every restart. The default bridge does not provide DNS-based name resolution. This is the critical weakness of the default bridge.

# Launch on the default bridge
docker run -d --name db postgres:16
docker run -it --rm --link db alpine sh
# ping db   (works because --link adds it to /etc/hosts)

The --link option is legacy. The modern approach is to use a user-defined bridge and communicate directly via container names.

user-defined bridge — Finding Containers by Name

This is a bridge network you create yourself. Docker provides automatic DNS, so containers within the same network can find each other by container name.

# Create a new network
docker network create app-net

# Launch two containers on the same network
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=secret \
  postgres:16

docker run -d --name api --network app-net \
  -e DB_HOST=db \
  -e DB_USER=postgres \
  -e DB_PASSWORD=secret \
  myapp:1.0

The key is -e DB_HOST=db. The API container can connect to the DB using the hostname db. Docker’s built-in DNS resolves container names to their current IPs. Even if the DB restarts and gets a new IP, the name stays the same, so you do not need to change app code.

User-defined bridges offer several benefits: automatic DNS-based name resolution, isolation (only containers attached to the same network can reach each other), and the ability to connect or disconnect containers at runtime:

# Add/remove a container from a network
docker network connect app-net cache
docker network disconnect app-net cache

When grouping multiple containers in practice, use a user-defined bridge instead of the default bridge. Docker Compose automatically creates a user-defined bridge per project, so using Compose naturally sidesteps this issue.

host — Sharing the Network with the Host

--network host makes the container share the host's network stack instead of getting its own network namespace. What the container sees as eth0 is the host's eth0 itself.

docker run --rm --network host nginx:1.27
# Directly occupies the host's port 80

The advantage is speed. With no NAT overhead, network I/O latency is eliminated. Useful for UDP streaming, high-performance proxies, and latency-sensitive services.

The downsides are also significant: -p is ignored (there is no separate namespace to map into), port conflicts with host processes become your problem, and the network-level isolation between container and host disappears. Note also that on Docker Desktop (macOS/Windows), the "host" is the Linux VM, not your machine.

Unless there is a specific reason, bridge + -p is better than host. Only use it when performance is truly critical or when direct binding to the host network is required.

none — Turning Off Networking Entirely

--network none gives the container no network at all. Only the loopback (lo) exists, and external communication is impossible.

docker run --rm --network none alpine ip addr
# 1: lo: <LOOPBACK> is the only thing shown

Use cases are narrow but useful in two situations:

  1. Strong isolation for batch jobs that need no external communication
  2. Custom networking scenarios where you configure the network yourself (CNI plugins, etc.)

Rarely used in general development. Just know it exists and move on.

overlay — Networking Across Multiple Hosts

The overlay driver enables containers across multiple Docker hosts to communicate as if they were on the same network. Primarily used in Docker Swarm.

flowchart LR
    subgraph H1["Docker Host 1"]
        C1["Container A"]
        VX1["VXLAN"]
    end
    subgraph H2["Docker Host 2"]
        C2["Container B"]
        VX2["VXLAN"]
    end

    C1 --> VX1
    VX1 <-->|UDP 4789<br/>VXLAN tunnel| VX2
    VX2 --> C2

Internally it uses VXLAN to create L2 tunnels between hosts. Containers A and B appear to be on the same subnet even though they are on different hosts.

If you are not using Swarm, you will rarely work with overlay directly. Kubernetes uses CNI plugins (Flannel, Calico, Cilium, etc.) instead of Docker overlay to handle the same task more flexibly. In this series, we will just establish the context that “this driver exists, and Swarm/CNI build on it.”

Port Forwarding — What -p Actually Does

What does -p 8080:80 do? It forwards traffic arriving at host port 8080 to container port 80. Docker implements this with iptables NAT rules.

docker run -d --name web -p 8080:80 nginx:1.27

# Looking at the actual iptables NAT table on a Linux host
sudo iptables -t nat -L DOCKER -n
# Chain DOCKER (2 references)
# target     prot opt source    destination
# DNAT       tcp  --  anywhere  anywhere  tcp dpt:8080 to:172.17.0.2:80

A single DNAT rule redirects the traffic. Here is exactly what happens:

  1. An external TCP request arrives at host 0.0.0.0:8080
  2. iptables rewrites the destination to port 80 of the container's IP (Destination NAT)
  3. The packet is delivered to the container through the bridge network
  4. On the way back, connection tracking reverses the rewrite, so the client sees the reply coming from host port 8080
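To make step 2 concrete, here is a toy model of the destination rewrite — plain shell, not real iptables, using the hypothetical rule from the transcript above (host port 8080 mapped to 172.17.0.2:80):

```shell
# Toy model of the DOCKER DNAT chain -- NOT real iptables, only an
# illustration of the destination rewrite in step 2 above.
# Rule (from the transcript): tcp dpt:8080 -> 172.17.0.2:80
dnat() {
  local dst="$1"                     # destination as "ip:port" seen by the host
  case "$dst" in
    *:8080) echo "172.17.0.2:80" ;;  # rule matches: rewrite the destination
    *)      echo "$dst"          ;;  # no rule: packet passes unchanged
  esac
}

dnat "192.168.1.10:8080"   # prints 172.17.0.2:80
dnat "192.168.1.10:22"     # prints 192.168.1.10:22 (unchanged)
```

The real kernel does the same lookup per packet, keyed on protocol and destination port, and conntrack remembers the rewrite so replies are translated back automatically.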

The -p option has several formats:

# host port : container port
-p 8080:80

# Bind to interface — not accessible externally, only locally
-p 127.0.0.1:8080:80

# Random host port — Docker picks an available port
-p 80

# UDP port
-p 53:53/udp

# Multiple ports
-p 8080:80 -p 8443:443

Binding to 127.0.0.1 is important for security. If you use -p 5432:5432 without any configuration, the DB is exposed to the entire world. If it is only for local use, narrow the scope with -p 127.0.0.1:5432:5432.
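A quick way to audit this on the host: the hypothetical helper below (the name and message are mine, not a Docker command) filters ss/netstat output for sockets bound to every interface:

```shell
# Hypothetical audit helper: read `ss -tln` output on stdin and flag
# listening sockets bound to all interfaces (0.0.0.0, [::], or *).
flag_exposed() {
  awk 'NR > 1 && $4 ~ /^(0\.0\.0\.0|\[::\]|\*):/ {
    n = split($4, a, ":")             # last colon-separated field is the port
    print "exposed to the world: port " a[n]
  }'
}

# Usage: ss -tln | flag_exposed
```

Anything it prints is reachable from other machines; loopback-bound ports (127.0.0.1:...) stay silent.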

Default Bridge vs User-Defined Bridge — Side by Side

Here is a summary of the differences between the two bridges, ready for practical use:

| Property | Default bridge | User-defined bridge |
|---|---|---|
| Creation | Automatic at Docker install | Explicit via docker network create |
| DNS | None (IP-only communication) | Yes (container name resolution) |
| Isolation | All containers on the same network | Separated per network |
| Connect/disconnect | Requires recreation | Runtime connect/disconnect |
| --link | Supported (legacy) | Not needed |
| Recommended? | Legacy/testing only | Recommended for production |

Docker Compose Networking

Compose automatically creates a user-defined bridge for each project. Consider the following docker-compose.yml:

services:
  api:
    image: myapp:1.0
    ports:
      - "8080:3000"
    environment:
      DB_HOST: db
      REDIS_HOST: cache
    depends_on:
      - db
      - cache

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7

volumes:
  pgdata:

When you run docker compose up -d, Compose does the following:

  1. Creates a user-defined bridge named <project-name>_default
  2. Attaches all three services to this network
  3. Each service name becomes a DNS hostname (db, cache, api)
  4. Only the api service is published: host port 8080 forwards to its port 3000. The DB and Redis have no external exposure

As a result, the API communicates internally with DB_HOST=db and REDIS_HOST=cache, while externally only the API’s port 8080 is accessible. This is the most common networking pattern in practice.
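If host tooling (psql, a GUI client) does need to reach the DB during development, a loopback-only publish keeps it reachable from your machine but from nothing else — a sketch against the compose file above:

```yaml
services:
  db:
    ports:
      - "127.0.0.1:5432:5432"   # host-local only; other machines cannot connect
```

This combines the internal-DNS pattern with the 127.0.0.1 binding advice from the port-forwarding section.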

If you want to separate networks further, you can declare them in Compose:

services:
  api:
    networks: [frontend, backend]
  db:
    networks: [backend]
  nginx:
    networks: [frontend]

networks:
  frontend:
  backend:
    internal: true   # Network with no external access

A backend network with internal: true is isolated from the host. Place sensitive services like the DB inside it, and only expose external-facing services (nginx) on the frontend — enabling layered security.

DNS and Service Discovery

The DNS in user-defined bridges resolves container names by default. You can also assign aliases with --network-alias:

docker run -d --name primary-db --network app-net \
  --network-alias db \
  postgres:16
# Also accessible via the name "db"

If multiple containers share the same alias, it behaves like round-robin DNS:

docker run -d --name api1 --network app-net --network-alias api nginx
docker run -d --name api2 --network app-net --network-alias api nginx
# Queries to "api" from another container return both IPs in alternation

However, this is only DNS-level distribution, not true load balancing. For real load balancing, you need an nginx/haproxy in front, or Swarm/Kubernetes Services.
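For contrast, a minimal sketch of actual load balancing in front of the two replicas above — an nginx config assuming nginx runs on the same app-net, so the container names api1 and api2 resolve (the upstream name api_pool is mine):

```nginx
# Minimal reverse-proxy sketch: nginx balances requests across the two
# containers instead of relying on DNS round-robin.
upstream api_pool {
    server api1:80;
    server api2:80;
    # least_conn;   # optional: send each request to the least-busy backend
}

server {
    listen 80;
    location / {
        proxy_pass http://api_pool;
    }
}
```

Unlike round-robin DNS, the proxy sees connection state, so it can retry a failed backend and spread load per request rather than per resolution.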

IPv6 and Other Options

When creating a network, you can specify subnet, gateway, and driver options:

docker network create \
  --driver bridge \
  --subnet 10.20.0.0/16 \
  --gateway 10.20.0.1 \
  --ip-range 10.20.10.0/24 \
  custom-net

IPv6 can also be enabled:

docker network create --ipv6 --subnet 2001:db8::/64 v6-net

For most projects, the default subnets Docker assigns are sufficient. Only adjust when subnet collisions occur (e.g., the company internal network and Docker’s default network use the same address range, causing routing confusion).

Debugging — When Traffic Is Not Getting Through

Here is the most commonly used network debugging procedure in practice:

# Container's network information
docker inspect --format '{{json .NetworkSettings.Networks}}' web | jq

# List of containers on a network
docker network inspect app-net --format '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'

# DNS check from inside a container
docker exec -it api nslookup db
docker exec -it api getent hosts db

# Check port listening
docker exec -it api netstat -tlnp
docker exec -it api ss -tlnp   # Preferred these days

# Check port forwarding from host
ss -tlnp | grep 8080
sudo iptables -t nat -L DOCKER -n --line-numbers

If ping resolves the name but TCP connection fails, the container is not listening on that port. If DNS itself fails, suspect network membership or the driver type. Name resolution not working on the default bridge is “not a bug — it’s expected behavior.”
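That decision logic can be wrapped in a small helper — a hypothetical sketch (the name and messages are mine) you can run inside a container or on the host to tell DNS failures apart from listening failures:

```shell
# Hypothetical debugging helper: distinguish "name does not resolve"
# from "name resolves but nothing listens on the port".
# Assumes getent, timeout, and bash's /dev/tcp (standard on Linux).
check_conn() {
  local host="$1" port="$2"
  if ! getent hosts "$host" > /dev/null; then
    echo "DNS: cannot resolve $host (wrong network, or default bridge?)"
    return 1
  fi
  if ! timeout 2 bash -c "exec 3<> /dev/tcp/$host/$port" 2>/dev/null; then
    echo "TCP: $host resolves, but nothing is listening on port $port"
    return 2
  fi
  echo "OK: $host:$port is reachable"
}

# Example: check_conn db 5432
```

A "DNS:" result points at network membership; a "TCP:" result points at the service inside the container.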

Moving Past the Fundamentals

From Part 1 on VMs vs containers, Part 2 on images and layers, Part 3 on Dockerfiles, Part 4 on lifecycle, Part 5 on data, and now networking — these six parts cover Docker’s core concepts. In one sentence: Docker runs isolated processes made with Linux kernel features, in the reproducible form of images, and connects them to the outside world with volumes and networks.

Starting from Part 7, we enter hands-on operational topics. Docker Compose for managing multiple containers, image optimization and registries, security, BuildKit, and production best practices.


In the next part, we cover how to bundle and operate multiple containers together using Docker Compose.

Part 7: Docker Compose

