Table of contents
- What Is BuildKit?
- Verifying Activation
- Cache Mounts — A Revolution in Dependency Install Speed
- Build Secrets — Keeping Them Out of the Image
- Inline Cache and Registry Cache
- buildx — Multi-Architecture Builds
- Build Arguments and Environment Branching
- Build Provenance
- Practical Template — All-in-One
- Where We Stand
What Is BuildKit?
BuildKit is a build engine developed independently of the Docker daemon under the moby/buildkit project. It analyzes a Dockerfile as a DAG (Directed Acyclic Graph) and runs stages that do not depend on each other in parallel. The legacy builder executed every instruction strictly in sequence; BuildKit processes independent stages simultaneously.
flowchart LR
subgraph LEGACY["Legacy builder"]
L1["RUN apt install"] --> L2["RUN npm install"] --> L3["RUN build"]
end
subgraph BUILDKIT["BuildKit"]
B0["Analysis: DAG"]
B0 --> BA["stage: deps"]
B0 --> BB["stage: assets"]
BA --> BC["stage: compile"]
BB --> BC
BC --> BD["stage: final"]
end
In multi-stage builds, when there are multiple stages that do not depend on each other, BuildKit runs them simultaneously. This is why build times noticeably decrease.
Verifying Activation
On recent Docker versions (Docker Engine 23.0 and later), BuildKit is the default builder and no action is needed. On older versions, enable it via an environment variable:
export DOCKER_BUILDKIT=1
docker build -t myapp:1.4.2 .
A quick check: if the build output starts with [+] Building ..., BuildKit is active. The legacy builder starts with Sending build context ....
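For example, the first lines of a BuildKit run look like this (timings illustrative):
[+] Building 14.2s (12/12) FINISHED
 => [internal] load build definition from Dockerfile                 0.0s
 => [internal] load metadata for docker.io/library/node:20-alpine   0.4s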
Adding a syntax directive at the top of the Dockerfile ensures you can safely use the latest Dockerfile features:
# syntax=docker/dockerfile:1.7
The # syntax=... directive tells BuildKit to pull and use the specified version of the Dockerfile frontend, so features like --mount behave consistently regardless of the locally installed Docker version.
Cache Mounts — A Revolution in Dependency Install Speed
This is the feature with the most noticeable impact. Tools like apt, npm, go mod, and pip all have their own cache directories. When layer cache is invalidated, these caches are lost too, causing everything to be downloaded from scratch. --mount=type=cache persists these cache directories across builds.
# syntax=docker/dockerfile:1.7
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
Because the .npm directory is mounted as a cache, even if package.json changes slightly and invalidates the layer cache, most package tarballs are reused from the local cache.
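A quick way to feel the difference (illustrative; assumes the Dockerfile above):
time docker build -t myapp:dev .
# Bump any dependency version in package.json, invalidating the COPY layer, then:
time docker build -t myapp:dev .
# npm ci runs again, but most tarballs now come from the mounted /root/.npm cache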
For Go:
# syntax=docker/dockerfile:1.7
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 go build -o /out/server ./cmd/server
Both the module cache and the build cache are persisted. Rebuilds after a layer-cache miss feel several times faster from the second run onward.
For apt, it is slightly different. Debian/Ubuntu base images are configured to delete the package cache after each install, so you disable that behavior and attach cache mounts. The sharing=locked option serializes access so that concurrent builds do not corrupt apt's shared state:
# syntax=docker/dockerfile:1.7
FROM ubuntu:22.04
RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
> /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y --no-install-recommends \
build-essential libpq-dev
Build Secrets — Keeping Them Out of the Image
Let’s revisit what was briefly covered in Part 10. --mount=type=secret exposes files only during the build and does not leave them in the image.
# syntax=docker/dockerfile:1.7
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    --mount=type=cache,target=/root/.npm \
    npm ci
docker build \
--secret id=npmrc,src=$HOME/.npmrc \
-t myapp:1.4.2 .
The id in --secret must match the id in the Dockerfile for the two to be linked. After the build finishes, the mount disappears and .npmrc is not present in the final image; even docker history shows no trace.
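If target is omitted, the secret is mounted at /run/secrets/<id> by default. A minimal sketch (the endpoint URL is illustrative):
# syntax=docker/dockerfile:1.7
FROM alpine:3.20
RUN apk add --no-cache curl
RUN --mount=type=secret,id=api_token \
    curl -fsS -H "Authorization: Bearer $(cat /run/secrets/api_token)" \
    https://api.example.com/validate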
For SSH keys (when pulling dependencies from private Git repos), use type=ssh:
# syntax=docker/dockerfile:1.7
FROM alpine:3.20
RUN apk add --no-cache git openssh-client && \
    mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh \
    git clone git@github.com:private/repo.git /src
docker build --ssh default=$SSH_AUTH_SOCK -t myapp:1.4.2 .
This connects the build to the host's SSH agent, so authentication works without embedding keys in the image. (The ssh-keyscan step above is what lets git trust GitHub's host key inside the container.)
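If no agent is running on the host, start one and load the key first (key path illustrative):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519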
Inline Cache and Registry Cache
CI runners are typically ephemeral, so local build cache does not survive between jobs. BuildKit provides the ability to use a remote registry as a cache store.
Inline Cache — Cache Embedded in the Image
docker buildx build \
--cache-to=type=inline \
-t harbor.example.com/team/myapp:1.4.2 \
--push .
type=inline embeds cache metadata in the final image itself; note that only mode=min is supported, so intermediate stages are not cached. The next build references this.
docker buildx build \
--cache-from=harbor.example.com/team/myapp:1.4.2 \
-t harbor.example.com/team/myapp:1.4.3 \
--push .
Registry Cache — Separate Cache Image
The more commonly used approach in practice is type=registry cache. Cache is pushed/pulled as a separate tag from the main image.
docker buildx build \
--cache-to=type=registry,ref=harbor.example.com/team/myapp:buildcache,mode=max \
--cache-from=type=registry,ref=harbor.example.com/team/myapp:buildcache \
-t harbor.example.com/team/myapp:1.4.2 \
--push .
mode=max pushes all intermediate layers to the cache; mode=min stores only the layers of the final image, which is smaller but yields lower cache hit rates.
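In practice, teams often keep one cache ref per branch and fall back to the main branch's cache to avoid cross-branch pollution; a sketch, assuming BRANCH is supplied by CI:
docker buildx build \
  --cache-from=type=registry,ref=harbor.example.com/team/myapp:buildcache-main \
  --cache-from=type=registry,ref=harbor.example.com/team/myapp:buildcache-${BRANCH} \
  --cache-to=type=registry,ref=harbor.example.com/team/myapp:buildcache-${BRANCH},mode=max \
  -t harbor.example.com/team/myapp:1.4.2 \
  --push .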
For GitHub Actions, type=gha (GitHub Actions built-in cache) is also useful:
- uses: docker/build-push-action@v5
  with:
    push: true
    tags: ghcr.io/myorg/myapp:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
buildx — Multi-Architecture Builds
It is common for developer laptops to run Apple Silicon (ARM64) while production servers run x86 (AMD64). To serve both architectures from the same image tag, buildx is needed.
Creating a Builder
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap
--use sets it as the default builder. inspect --bootstrap starts the builder's BuildKit container (the docker-container driver) so it is ready to build.
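You can verify the result with:
docker buildx ls
# the currently selected builder is marked with an asterisk (*)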
QEMU — Emulating Other Architectures
To build ARM64 on an x86 host, QEMU is needed. The latest Docker Desktop includes it by default; on Linux hosts, a one-time registration is required:
docker run --privileged --rm tonistiigi/binfmt --install all
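To confirm the emulation is registered, run a foreign-architecture image (output shown for an x86 host):
docker run --rm --platform linux/arm64 alpine:3.20 uname -m
# prints: aarch64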
Then pass the architecture list via --platform:
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t harbor.example.com/team/myapp:1.4.2 \
--push .
The build flow visualized:
flowchart TB
CMD["docker buildx build --platform linux/amd64,linux/arm64"] --> BK["BuildKit (docker-container builder)"]
BK --> QEMU["QEMU emulator"]
QEMU --> AMD["linux/amd64 build"]
QEMU --> ARM["linux/arm64 build"]
AMD --> MAN["Manifest list assembly"]
ARM --> MAN
MAN --> REG["Registry push"]
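After the push, confirm that both architectures made it into the manifest list:
docker buildx imagetools inspect harbor.example.com/team/myapp:1.4.2
# lists manifest entries for linux/amd64 and linux/arm64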
Solving the Slowness of Emulation
QEMU emulation is slow. Building ARM64 on x86 takes several times longer than native. Two alternatives exist:
- Native builder pool — Prepare separate AMD64 and ARM64 hosts so buildx distributes each platform's build to a native node:
docker buildx create --name multiarch \
  --node arm-node --platform linux/arm64 \
  ssh://user@arm-host
docker buildx create --append --name multiarch \
  --node amd-node --platform linux/amd64
- Cross-compilation — For languages like Go where cross-compilation is easy, compile natively on the build platform and only vary the base image per target:
# syntax=docker/dockerfile:1.7
FROM --platform=$BUILDPLATFORM golang:1.22 AS builder
ARG TARGETOS TARGETARCH
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
    go build -o /out/server ./cmd/server

FROM alpine:3.20
COPY --from=builder /out/server /server
ENTRYPOINT ["/server"]
BUILDPLATFORM is the platform of the machine performing the build; TARGETPLATFORM, TARGETOS, and TARGETARCH describe the platform being built for. This combination is the standard pattern for cross-compilation.
Build Arguments and Environment Branching
ARG is a build-time variable: it exists only while the image is being built, unlike ENV, whose values persist into the image and its containers.
# syntax=docker/dockerfile:1.7
FROM node:20-alpine AS builder
ARG NODE_ENV=production
RUN npm ci --omit=dev && npm run build:${NODE_ENV}
docker build --build-arg NODE_ENV=staging -t myapp:staging .
An important caveat: do not put secrets in ARG. Values passed via --build-arg can remain in the image metadata. Secrets must always be passed via --mount=type=secret.
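A quick demonstration of the leak (image name and token are illustrative):
docker build --build-arg API_TOKEN=abc123 -t leaky .
docker history --no-trunc leaky | grep API_TOKEN
# if a RUN step consumed the ARG, its value shows up in the layer history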
Build Provenance
BuildKit can attach provenance attestations recording “which Dockerfile and which build command produced this image.”
docker buildx build \
--provenance=true \
--sbom=true \
-t harbor.example.com/team/myapp:1.4.2 \
--push .
docker buildx imagetools inspect harbor.example.com/team/myapp:1.4.2
For organizations that need to meet SLSA (Supply-chain Levels for Software Artifacts) requirements, this feature is essential. Supply chain evidence follows each image.
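To read the provenance back, imagetools inspect accepts a Go template; a sketch:
docker buildx imagetools inspect \
  harbor.example.com/team/myapp:1.4.2 \
  --format '{{ json .Provenance }}'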
Practical Template — All-in-One
Here is an all-in-one buildx invocation for CI:
docker buildx build \
--platform linux/amd64,linux/arm64 \
--cache-from=type=registry,ref=harbor.example.com/team/myapp:buildcache \
--cache-to=type=registry,ref=harbor.example.com/team/myapp:buildcache,mode=max \
--provenance=true --sbom=true \
--secret id=gh_token,env=GH_TOKEN \
--tag harbor.example.com/team/myapp:${VERSION} \
--tag harbor.example.com/team/myapp:${VERSION}-${SHA} \
--push .
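Here, VERSION and SHA are assumed to be supplied by the CI environment, for example:
VERSION=1.4.2
SHA=$(git rev-parse --short HEAD)
export GH_TOKEN   # already set by the CI runner; read via --secret env=GH_TOKEN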
Multi-architecture, registry cache, SBOM, provenance, secrets, and multi-tagging are all packed into this one command. Setting up this template early in a project makes later expansion easy.
Where We Stand
Leveraging BuildKit’s core features improves build time, security, and reproducibility simultaneously. Now we move to running these well-baked images reliably in production for the long haul.
The next part covers production best practices. HEALTHCHECK, graceful shutdown, log drivers, resource limits, init processes (tini/dumb-init) — the settings that determine whether operations survive or fail.
