Table of contents
- Who Supervises the Server’s Processes?
- The Big Picture of systemd
- systemctl — The Standard Command for Service Operations
- Unit File Structure
- Custom Service — Putting Your App on systemd
- journalctl — View All Logs in One Place
- Timers — Replacing cron
- Handling Failed Units
- Containers Mostly Don’t Have systemd
Who Supervises the Server’s Processes?
You SSH into a server, start Nginx, bring up the backend app, and schedule a backup script to run every day at 5 AM. The common thread in all of this is “when, how, and in what order to start and stop processes.”
In the old days of Linux, this job was handled by SysV init — a collection of shell scripts under /etc/init.d/ that determined the boot order. Being shell-script-based made it flexible, but parallel execution was difficult, dependency expression was poor, and logging was everyone’s own problem. Investigating “why didn’t this service start?” meant reading scripts line by line.
systemd redrew this world. Most modern Linux distributions (Ubuntu 16+, Debian 8+, RHEL/CentOS 7+, virtually all except container base images) have adopted systemd as their default init system. Declarative unit files, parallel boot, centralized logging, dependency graphs — all the tools you need come bundled together.
The Big Picture of systemd
While systemd is called an “init system,” it’s actually a product suite encompassing almost everything after Linux boots. Here’s a one-page summary of the core components.
flowchart TB
KERNEL["Linux Kernel"] --> SYSD["systemd (PID 1)"]
SYSD --> UNITS["Units<br/>(.service / .socket / .timer / .target ...)"]
UNITS --> SVC[".service<br/>Daemons & processes"]
UNITS --> SOC[".socket<br/>Socket-based activation"]
UNITS --> TMR[".timer<br/>cron replacement"]
UNITS --> TGT[".target<br/>runlevel replacement"]
SYSD --> JOURNALD["systemd-journald<br/>Centralized logging"]
SYSD --> LOGIND["systemd-logind<br/>Login sessions"]
SYSD --> RESOLVED["systemd-resolved<br/>DNS cache"]
SYSD --> TIMESYNCD["systemd-timesyncd<br/>NTP"]
Right after boot, the kernel launches systemd as PID 1, and all subsequent user-space processes start as children of systemd. Multiple subsystems (journald, logind, resolved, timesyncd) each take on their respective roles.
In this post, we’ll dig into only what’s most commonly used in practice: .service, systemctl, journalctl, and timers. The rest can be looked up in the official documentation when needed.
systemctl — The Standard Command for Service Operations
systemctl is the entry point for working with units. Starting, stopping, enabling, and checking status all go through here.
The five most frequently used commands:
# Check status
sudo systemctl status nginx
# Start, stop, restart immediately
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
# Reload configuration only (process stays running)
sudo systemctl reload nginx
# Register/unregister for automatic start at boot
sudo systemctl enable nginx
sudo systemctl disable nginx
# Enable + start immediately in one go
sudo systemctl enable --now nginx
The confusing part here is the difference between start and enable. start launches the service right now, while enable registers it to start automatically on the next boot. In practice, enable --now is often used to do both at once.
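The two states can also be checked independently, which is handy in scripts:

```shell
# Is it running right now? Prints "active" or "inactive"
systemctl is-active nginx

# Will it start automatically at boot? Prints "enabled" or "disabled"
systemctl is-enabled nginx
```

Both commands also return a nonzero exit code on the negative answer, so they work directly in shell conditionals.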
The status output is quite rich, so let’s walk through it.
sudo systemctl status nginx
# ● nginx.service - A high performance web server
# Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
# Active: active (running) since Mon 2026-04-20 10:00:12 UTC; 2h 15min ago
# Docs: man:nginx(8)
# Main PID: 12345 (nginx)
# Tasks: 5 (limit: 4915)
# Memory: 12.8M
# CPU: 234ms
# CGroup: /system.slice/nginx.service
# ├─12345 "nginx: master process"
# ├─12346 "nginx: worker process"
# └─12347 "nginx: worker process"
#
# Apr 20 10:00:12 host systemd[1]: Started A high performance web server.
# Apr 20 10:00:12 host nginx[12345]: nginx: [warn] conflicting server name ...
This output is packed with useful information: the process tree (CGroup), memory and CPU usage, and even the last few log lines. Running status first should become an almost reflexive habit early in troubleshooting.
Unit File Structure
.service unit files are in a simple INI format. The key distinction is the separation between system-managed files and user-modified files.
flowchart LR
A["/lib/systemd/system/<br/>Distribution default units"]
B["/etc/systemd/system/<br/>Admin custom units"]
C["/run/systemd/system/<br/>Runtime dynamic units"]
A -.->|"override"| B
C -.->|"override"| B
B -->|"highest priority"| FINAL["Final applied unit"]
When the same unit name exists in multiple directories, /etc/systemd/system/ takes the highest priority. So the convention is to leave distribution-provided originals untouched and place your customizations in /etc/systemd/system/.
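To see exactly which file (or files) systemd is reading for a given unit, systemctl cat prints each one with its full path:

```shell
# Prints the unit file in effect, plus any drop-in overrides,
# each preceded by a comment line showing where it came from
systemctl cat nginx
```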
As an example, let’s look at Nginx’s unit file.
cat /lib/systemd/system/nginx.service
[Unit]
Description=A high performance web server and a reverse proxy server
After=network.target
Documentation=man:nginx(8)
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
Restart=on-failure
[Install]
WantedBy=multi-user.target
There are three sections.
- [Unit]: Metadata and dependencies for the unit. After= lists units that must be started before this one; Requires= adds a hard dependency (if a required unit fails, this one fails with it)
- [Service]: How the actual process is started. Type= controls fork behavior, ExecStart= is the start command, Restart= is the restart policy on abnormal exit
- [Install]: Which target to link to when enabled. WantedBy=multi-user.target means "comes up together in the normal server boot state"
Values for Type= include simple (default, no fork), forking (daemon-style), oneshot (execute once then exit), notify (ready notification via sd_notify), exec (wait for fork only), and more. For most modern apps, simple or exec is appropriate.
override.conf — Modifying Parts Without Touching the Original
When you want to modify the distribution’s original unit, don’t edit the file directly — use the drop-in directory.
sudo systemctl edit nginx
This command opens /etc/systemd/system/nginx.service.d/override.conf in an editor. Just write the directives you want to override.
[Service]
# Clear original environment variables and set new ones
Environment=
Environment="NGINX_WORKER_CONNECTIONS=2048"
Restart=always
RestartSec=5s
For list-type directives like Environment=, remember the pattern of resetting with an empty value first then setting the new value. Otherwise, the original and the override will merge, leaving unexpected values.
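You can confirm what systemd actually ended up with by querying the merged property (here assuming the override above is in place):

```shell
# Show the final computed value after all overrides are merged
systemctl show nginx -p Environment
```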
After editing, a daemon reload is needed. systemctl edit does this automatically, but if you edited the file directly, run:
sudo systemctl daemon-reload
sudo systemctl restart nginx
Custom Service — Putting Your App on systemd
Now let’s register an app you built as a systemd service. Suppose you want to register a simple API server built with Node.js as a service called myapp.
# Path where the app is deployed
/opt/myapp/server.js
Create the unit file.
sudo tee /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=My Node.js API
After=network.target
[Service]
Type=simple
User=myapp
Group=myapp
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/node /opt/myapp/server.js
Restart=on-failure
RestartSec=3s
Environment=NODE_ENV=production
Environment=PORT=8080
# Resource limit examples
LimitNOFILE=65535
MemoryMax=512M
[Install]
WantedBy=multi-user.target
EOF
Then register, start, and enable auto-start in one go.
sudo systemctl daemon-reload
sudo systemctl enable --now myapp
sudo systemctl status myapp
Here are some practical tips.
- Always specify User=. The default is root, and running your app as root is a source of incidents in production. If the account doesn't exist, create it with sudo useradd -r -s /usr/sbin/nologin myapp
- Restart=on-failure is the recommended default. always restarts even on normal exit (exit 0), so only use it when that intent is clear
- Environment variables can be set with Environment= or read from a file with EnvironmentFile=/etc/myapp.env. Files are better for secrets
- ExecStart must use absolute paths. $PATH may not be set as expected
- Logs go to journald automatically when output goes to stdout/stderr. No need for separate file logging — journalctl can read it
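A minimal sketch of the service-user and secrets-file setup from the tips above; the variable name and value are placeholders:

```shell
# Create a system account with no login shell for the service
sudo useradd -r -s /usr/sbin/nologin myapp

# Keep secrets in a root-only file instead of the unit itself
# (DATABASE_URL is a placeholder)
sudo tee /etc/myapp.env >/dev/null <<'EOF'
DATABASE_URL=postgres://myapp:changeme@localhost/myapp
EOF
sudo chmod 600 /etc/myapp.env

# Then reference it from the [Service] section:
#   EnvironmentFile=/etc/myapp.env
```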
journalctl — View All Logs in One Place
The point mentioned earlier — “stdout/stderr automatically goes to journald” — is one of systemd’s powerful strengths. Previously, each service scattered logs in different places like /var/log/nginx/, /var/log/mysql/, etc. journalctl gathers them into a single interface.
The most frequently used combinations:
# Follow recent logs continuously (like tail -f)
sudo journalctl -u nginx -f
# Just the last 50 lines
sudo journalctl -u nginx -n 50
# Specific time period
sudo journalctl -u nginx --since "1 hour ago"
sudo journalctl -u nginx --since "2026-04-20 09:00" --until "2026-04-20 10:00"
# Specific service + priority level (err and above)
sudo journalctl -u myapp -p err
# By boot (current boot)
sudo journalctl -b
# Previous boot
sudo journalctl -b -1
# Kernel messages only
sudo journalctl -k
Just memorize -u <unit>, -f (follow), and --since / --until and you’ll cover most cases.
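Two more flags worth knowing once the basics are down: filtering combinations and output formats.

```shell
# Errors only, current boot, no pager -- good for quick triage
sudo journalctl -u myapp -p err -b --no-pager

# JSON output for scripts or log shippers
sudo journalctl -u myapp -n 10 -o json
```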
If you’re worried about logs growing too large, set rotation policies in /etc/systemd/journald.conf. The defaults vary depending on whether /var/log/journal exists on the distribution.
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
SystemMaxUse=1G
SystemMaxFileSize=100M
MaxRetentionSec=2week
After changing settings, restart journald.
sudo systemctl restart systemd-journald
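To check how much disk the journal is using, or to trim it once without changing the config:

```shell
# Current disk usage of all journal files
journalctl --disk-usage

# One-off cleanup of archived journal files
sudo journalctl --vacuum-size=500M    # keep at most 500M
sudo journalctl --vacuum-time=2weeks  # keep only the last two weeks
```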
Timers — Replacing cron
cron is an old friend, but it has real limitations. Retry policies on failure are hard to add, and control over environment variables, logging, and permissions is weak. systemd timers solve these problems with .service + .timer pairs.
As an example, let’s create a backup script that runs every day at 3 AM.
/etc/systemd/system/backup.service — what to do
[Unit]
Description=Daily database backup
After=postgresql.service
[Service]
Type=oneshot
User=backup
ExecStart=/usr/local/bin/backup.sh
/etc/systemd/system/backup.timer — when to do it
[Unit]
Description=Run daily database backup at 03:00
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
RandomizedDelaySec=5m
[Install]
WantedBy=timers.target
Then enable it.
sudo systemctl daemon-reload
sudo systemctl enable --now backup.timer
# Check timer status
systemctl list-timers
Running list-timers shows the next execution time in a table. Persistent=true is an option that “catches up on missed executions after boot if the machine was off when the timer should have fired” — a feature cron doesn’t have. RandomizedDelaySec adds a random delay of 0-5 minutes instead of firing at exactly the scheduled time, preventing timer stampedes.
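Because the work lives in a plain .service unit, you can also run it by hand at any time to verify it works, without waiting for the timer to fire:

```shell
# Trigger the backup once, then inspect its logs
sudo systemctl start backup.service
sudo journalctl -u backup.service -n 20
```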
OnCalendar uses systemd's own calendar event syntax, which covers everything cron can express and more. A few common ones to keep handy:
*-*-* 03:00:00 Every day at 03:00
Mon..Fri 09:00 Weekdays at 9 AM
*-*-1 00:00:00 1st of every month at midnight
hourly Every hour on the hour
daily Every day at midnight
weekly Every Monday at midnight
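If you're unsure whether an expression means what you think, systemd-analyze calendar dry-runs it, printing the normalized form and the next elapse time:

```shell
systemd-analyze calendar "Mon..Fri 09:00"
systemd-analyze calendar "*-*-1 00:00:00"
```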
Handling Failed Units
A common task during operations is scanning for “which services are in a failed state.”
systemctl --failed
# UNIT LOAD ACTIVE SUB DESCRIPTION
# myapp.service loaded failed failed My Node.js API
If there’s a failed unit, the cause is almost always in the logs.
sudo systemctl status myapp
sudo journalctl -u myapp --since "10 minutes ago"
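Once the cause is fixed, clear the failed state so monitoring and --failed listings come back clean:

```shell
# Reset the unit's failed state, then try again
sudo systemctl reset-failed myapp
sudo systemctl start myapp
```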
For boot failures specifically, systemd-analyze blame lets you see “which units slowed down the boot.”
systemd-analyze blame | head
# 45.200s docker.service
# 12.400s snapd.service
# 6.200s systemd-networkd-wait-online.service
critical-chain draws the bottleneck chain in the boot path.
systemd-analyze critical-chain
This is useful as a first step in tuning large servers.
Containers Mostly Don’t Have systemd
Let’s address one easily confused point. Most containers don’t have systemd inside. If you docker run ubuntu, there’s no init system, no journald, and PID 1 is just the app process. This aligns with the container design philosophy (one container = one process).
So the line is clear: systemd is a tool for process management on hosts (VMs and bare metal). Container orchestration is handled by Kubernetes or Docker Compose. Different roles.
There are exceptions. systemd-nspawn and some OS containers (LXC) run systemd inside them as-is. But the answer to “why doesn’t systemctl work inside my Docker base image?” is “there’s no systemd in there.”
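You can see this for yourself in one line (assuming Docker is installed):

```shell
# Inside the container, the command itself runs as PID 1
docker run --rm ubuntu cat /proc/1/comm
# prints "cat"
```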
In the next part, we’ll look at Linux package management. We’ll cover how to use apt (Debian/Ubuntu family) and dnf/yum (RHEL family), the concepts of repositories and dependencies, and clarify frequently confused topics like apt-get vs apt.
