ioob.dev

Linux Basics Part 5 — Network Tools

· 8 min read
Linux Series (5/8)
  1. Linux Basics Part 1 — The Shell and Filesystem Structure
  2. Linux Basics Part 2 — File Permissions and Users/Groups
  3. Linux Basics Part 3 — Processes and Signals
  4. Linux Basics Part 4 — Text Processing and Pipes
  5. Linux Basics Part 5 — Network Tools
  6. Linux Basics Part 6 — Systemd and Service Management
  7. Linux Basics Part 7 — Package Management
  8. Linux Basics Part 8 — Bash Scripting Basics
The First Commands You Type After Landing on a Server

The first thing a developer does after landing on a remote server is almost always the same. “Is this server connected to the internet?”, “Which ports are open?”, “Is DNS working?”, “Is the API server responding?” The commands you type to answer these four questions are the subject of this post.

Linux has dozens of network tools, but the ones you actually reach for daily number fewer than ten: curl, wget, ss, netstat, ping, traceroute, dig, nslookup, and ssh. Just understanding how these tools overlap and differ will speed up your troubleshooting.

Below is a map that groups these tools by “which layer they inspect.”

flowchart TB
    subgraph L7["Application Layer"]
        CURL["curl / wget<br/>HTTP requests & downloads"]
        SSH["ssh<br/>Remote shell & file transfer"]
    end
    subgraph L4["Transport Layer / Sockets"]
        SS["ss / netstat<br/>Open ports & connection state"]
    end
    subgraph L3["Network Layer"]
        PING["ping<br/>Reachability"]
        TR["traceroute<br/>Path tracing"]
    end
    subgraph DNS["Name Resolution"]
        DIG["dig / nslookup<br/>DNS records"]
    end

    CURL --> DIG
    CURL --> SS
    SSH --> SS
    PING --> DIG
    TR --> PING

The post follows this order too. We’ll work down from the upper layer (HTTP), examining why each tool is needed and what pitfalls to watch for.

curl — The Swiss Army Knife for HTTP

curl takes a single URL and sends requests using almost any protocol. HTTP, HTTPS, FTP, SMTP, SCP, LDAP, and more. In practice, 95%+ of its use is for HTTP debugging.

The simplest usage is a GET request.

curl https://httpbin.org/get

httpbin.org is a public server built specifically for HTTP debugging. It echoes back the request headers and body as JSON. It’s a great practice partner when first learning curl.

To see headers too, use the -i (include) or -I (HEAD request only) option.

curl -i https://example.com
# HTTP/2 200
# content-type: text/html; charset=UTF-8
# ...body...

curl -I https://example.com
# Only sends HEAD to check response headers

POST requests combine -X POST with -d (strictly speaking, -d alone already implies POST, but -X POST makes the intent explicit). When sending a JSON body, you must specify the Content-Type header explicitly.

curl -X POST https://httpbin.org/post \
  -H "Content-Type: application/json" \
  -d '{"name":"ioob","lang":"ko"}'

Here are some practical options that are often overlooked.
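A handful that come up constantly (all of these are standard curl flags; the URLs are placeholders):

```shell
# -L: follow redirects (without it, a 301/302 just returns the redirect response)
curl -L https://example.com

# -o / -O: save the body to a file (chosen name / server-side name)
curl -o page.html https://example.com
curl -O https://example.com/archive.tar.gz

# -s: silent (no progress meter), -f: exit non-zero on HTTP errors (4xx/5xx)
curl -sf https://api.example.com/health || echo "health check failed"

# -w: print transfer metadata after the request completes
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" https://example.com

# --max-time: give up after N seconds
curl --max-time 5 https://example.com
```

The -sf combination is especially common in scripts and container health checks, since it turns HTTP errors into exit codes you can branch on.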

curl -v in particular is a debugging fundamental. It shows in one screen whether the TLS handshake failed, which headers were exchanged, and how big the response body is.

curl -v https://api.example.com/health
# * Trying 10.0.0.1:443...
# * Connected to api.example.com
# * TLSv1.3 (OUT), TLS handshake, Client hello (1):
# > GET /health HTTP/2
# > Host: api.example.com
# > User-Agent: curl/8.4.0
# < HTTP/2 200
# < content-type: application/json

When someone reports “the API isn’t working,” the first thing you type is curl -v. One screen separates whether it’s a client code problem, a network problem, or a server problem.

wget — Specialized for Simple Downloads

wget is, as the name suggests, a tool for “getting from the web.” It overlaps with curl in functionality but differs in philosophy. curl is closer to a one-shot request client, while wget is specialized for bulk downloads and mirroring.

wget https://example.com/archive.tar.gz
# Saves the file to the current directory

Frequently used options include the following.
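A few of the most common ones (all standard wget options; the URLs are placeholders):

```shell
# -O: save under a different name (-O- writes to stdout)
wget -O latest.tar.gz https://example.com/archive.tar.gz

# -c: resume an interrupted download where it left off
wget -c https://example.com/big-file.iso

# -q: quiet output, -P: save into a specific directory
wget -q -P /tmp https://example.com/archive.tar.gz

# --tries / --timeout: retry count and per-attempt timeout
wget --tries=3 --timeout=10 https://example.com/archive.tar.gz
```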

“Isn’t curl enough?” you might think. In practice, uses diverge. API calls and script automation favor curl, while downloading large files or resuming interrupted downloads favor wget for its simpler syntax. Which one to use in container image RUN commands comes down to team conventions.

ss / netstat — Who’s Holding Which Port

The moment inevitably comes when you land on a server and need to check “who is using this port?” Either port 8080 should have an app listening but you get Address already in use, or external access is blocked because the port isn’t open.

netstat is the traditional tool, but it’s old and has been deprecated along with the entire net-tools package. Modern Linux recommends ss (socket statistics) from the iproute2 package. On most recent distributions, ss comes pre-installed.

Here’s the most frequently used combination.

ss -tlnp
# -t: TCP
# -l: LISTEN state
# -n: Numbers instead of names (skip DNS/service name lookups)
# -p: Process information
#
# State   Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
# LISTEN  0        4096     0.0.0.0:22           0.0.0.0:*           users:(("sshd",pid=1234,fd=3))
# LISTEN  0        511      127.0.0.1:8080       0.0.0.0:*           users:(("node",pid=5678,fd=20))

What this output tells you is clear. Port 22 has sshd listening on all interfaces (0.0.0.0), while port 8080 has a Node process listening on localhost only (127.0.0.1). The reason port 8080 can’t be reached externally isn’t the firewall — it’s because the app is bound to localhost. One line reveals it.

Common variations include the following.

# View only established TCP connections
ss -tnp state established

# View only UDP listeners
ss -ulnp

# View a specific port only
ss -tlnp sport = :443

# Summary statistics
ss -s

If you’re familiar with netstat, this mapping table may help.

netstat (legacy)      iproute2 equivalent
netstat -tlnp         ss -tlnp
netstat -tnp          ss -tnp
netstat -r            ip route
netstat -i            ip -s link

Unless you’re maintaining legacy scripts that still use netstat, it’s better to write new scripts and new Docker images using ss.

ping — Can Packets Reach the Destination?

The principle behind ping is simple. It sends ICMP Echo Request packets to the target and measures the response time of the Echo Reply that comes back. It’s not a tool for “is the server alive?” but rather for “can packets from my host reach that destination?”

ping -c 4 8.8.8.8
# PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
# 64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=3.42 ms
# ...
# --- 8.8.8.8 ping statistics ---
# 4 packets transmitted, 4 received, 0% packet loss, time 3005ms
# rtt min/avg/max/mdev = 3.42/3.51/3.60/0.07 ms

-c 4 means send 4 packets and stop. Without it, ping continues until Ctrl+C.

A few more useful options:
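These are the iputils ping flags you are most likely to need (8.8.8.8 is just a convenient public target):

```shell
# -i: interval between packets in seconds (default 1)
ping -c 4 -i 0.5 8.8.8.8

# -W: seconds to wait for each reply before giving up
ping -c 2 -W 1 8.8.8.8

# -s: ICMP payload size in bytes (default 56)
ping -c 2 -s 1400 8.8.8.8

# -4 / -6: force IPv4 or IPv6
ping -4 -c 2 example.com
```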

One important caveat: ICMP is commonly blocked by firewalls. Default security groups on AWS EC2 instances and GCP VMs don’t allow ICMP. “Ping doesn’t work” doesn’t necessarily mean the server is down; in many such cases you should verify reachability over HTTP/TCP instead.
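When ICMP is blocked, you can still confirm TCP reachability with tools you already have. One portable trick is bash’s built-in /dev/tcp pseudo-device (a bash feature, not a real file; example.com:443 is a placeholder target):

```shell
# Succeeds only if a TCP handshake to example.com:443 completes within 3 seconds
if timeout 3 bash -c 'exec 3<>/dev/tcp/example.com/443'; then
    echo "TCP 443 reachable"
else
    echo "TCP 443 unreachable"
fi

# Or let curl do the equivalent check at the HTTP layer
curl -sf --max-time 3 -o /dev/null https://example.com && echo "HTTP OK"
```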

traceroute — Where Is It Getting Blocked?

If ping asks “does it reach?”, traceroute asks “which hops does it pass through? Where does it get blocked?” It measures the RTT to each router along the path.

traceroute www.google.com
# traceroute to www.google.com (142.250.76.100), 30 hops max
#  1  _gateway (192.168.1.1)  1.2 ms  1.1 ms  1.1 ms
#  2  10.0.0.1 (10.0.0.1)  8.3 ms  8.1 ms  8.2 ms
#  3  * * *
#  4  72.14.218.11 (72.14.218.11)  18.2 ms  17.5 ms  17.9 ms
#  ...
# 10  142.250.76.100  25.0 ms  24.8 ms  25.1 ms

A hop showing * * * means “a router that didn’t respond.” It’s not necessarily a failure — many routers block responses to traceroute ICMP/UDP. To distinguish between “packets are actually being dropped” and “simply not responding,” check whether the next hop responds.

On modern distributions, tracepath and mtr (traceroute + ping) are also used. mtr is particularly excellent for network quality diagnosis as it shows real-time packet loss rates per hop.

mtr www.google.com
# HOST: myhost                    Loss%   Snt   Last   Avg  Best  Wrst
#  1. _gateway                     0.0%    10    1.2   1.2   1.1   1.4
#  2. 10.0.0.1                     0.0%    10    8.3   8.2   8.0   8.5
#  3. 72.14.218.11                20.0%    10   18.1  18.3  17.5  19.2

If the 3rd hop shows 20% Loss%, you can narrow down the problem to that segment.

dig / nslookup — Looking Inside DNS

“I’m calling the API with curl but getting Could not resolve host.” This is when you need DNS diagnostics.

dig (Domain Information Groper) is a DNS lookup tool from the BIND package. Its machine-readable output format makes it suitable for scripts too.

dig example.com
# ;; ANSWER SECTION:
# example.com.  86400  IN  A  93.184.216.34

You can also specify a particular record type.

dig MX gmail.com        # Mail records
dig AAAA google.com     # IPv6
dig NS example.com      # Name servers
dig TXT example.com     # TXT (SPF, DKIM, etc.)

Adding the +short option outputs just the value, making it easy to pipe into scripts.

dig +short example.com
# 93.184.216.34

You can also query a specific DNS server directly. This is used to check “my machine’s DNS is acting strange, but does it work with a public DNS?”

dig @8.8.8.8 example.com

nslookup is a similar tool that’s older and has more human-friendly output.

nslookup example.com
# Server:   8.8.8.8
# Address:  8.8.8.8#53
#
# Non-authoritative answer:
# Name:  example.com
# Address: 93.184.216.34

Modern Linux recommends dig, but there are still many container images that only have nslookup. It’s handy to keep both tools in mind.

DNS cache debugging tip: If you keep querying the same domain but the answer doesn’t change, the resolver cache or systemd-resolved might be holding onto the value. Flush the cache with resolvectl flush-caches (or, on older systemd versions, sudo systemd-resolve --flush-caches).

SSH Basics — The Standard for Remote Access

Last is SSH. As long as you use Linux servers, SSH is inescapable. Cloud VMs, internal servers, home servers — nearly all remote access is SSH.

The simplest form:

ssh ioob@10.0.0.5
# Prompts for password, or connects directly if a key is registered

Key-Based Authentication — Forget Passwords

In practice, password authentication is almost never used. You create an SSH key pair (public key/private key) and place the public key on the server and the private key locally. During connection, the server verifies “are you the person with the private key?” through a challenge-response protocol.

The flow from key generation to registration:

sequenceDiagram
    autonumber
    participant Me as My machine
    participant Srv as Remote server

    Me->>Me: ssh-keygen (generates id_ed25519, id_ed25519.pub)
    Me->>Srv: ssh-copy-id user@host<br/>(adds public key to ~/.ssh/authorized_keys)
    Me->>Srv: ssh user@host
    Srv->>Me: Challenge with public key
    Me->>Srv: Response signed with private key
    Srv->>Me: Authentication successful → shell provided

The actual commands are as follows.

# 1) Generate key (Ed25519 recommended. Shorter and faster than RSA)
ssh-keygen -t ed25519 -C "ioob@laptop"
# → ~/.ssh/id_ed25519 (private key), ~/.ssh/id_ed25519.pub (public key)

# 2) Register the public key on the server
ssh-copy-id ioob@10.0.0.5

# 3) From now on, connect without a password
ssh ioob@10.0.0.5

If ssh-copy-id isn’t available in your environment, you can add it manually.

cat ~/.ssh/id_ed25519.pub | ssh ioob@10.0.0.5 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"

Never share your private key. Don’t put it in git commits, Slack messages, or Notion documents. Only the public key (.pub) should circulate.

~/.ssh/config — Stop Typing the Same Thing Every Time

As servers multiply, connection information becomes repetitive. Adding aliases in ~/.ssh/config lets you connect with a single line.

# ~/.ssh/config
Host prod-api
    HostName 10.0.0.5
    User ioob
    Port 22
    IdentityFile ~/.ssh/id_ed25519_work

Host bastion
    HostName bastion.example.com
    User ioob

Host prod-db
    HostName 10.0.2.10
    User ioob
    ProxyJump bastion

Now ssh prod-api is all it takes, and ssh prod-db automatically routes through bastion. ProxyJump handles multi-hop access through a jump host (bastion) in one go.

Port Forwarding Basics — Connecting Local and Remote with a Tunnel

SSH is more than just a shell access tool. It’s also a tool for creating arbitrary encrypted TCP tunnels. Local port forwarding is the most representative use.

“I want to connect my local IDE to the remote server’s DB (5432). But I don’t want to expose the DB port externally.” SSH tunneling solves this.

ssh -L 5432:localhost:5432 ioob@prod-api

This command means: Forward traffic coming to my local port 5432, through the SSH connection, to localhost:5432 on the prod-api server. The local IDE connects to jdbc:postgresql://localhost:5432/db, and the traffic reaches the remote DB through the SSH encrypted channel.

flowchart LR
    IDE["Local IDE<br/>localhost:5432"]
    SSH["SSH Tunnel<br/>Encrypted TCP"]
    SRV["prod-api<br/>localhost:5432"]
    DB[("PostgreSQL")]

    IDE -->|"connect"| SSH
    SSH --> SRV
    SRV -->|"loopback"| DB

The reverse direction — forwarding from remote to local — also exists (-R). It’s used when you need the remote server to access a specific port on your development machine. It’s useful to keep a few common forms in mind.

# Local forwarding: local port L → remote target
ssh -L 8080:internal-service:80 bastion

# Remote forwarding: remote port R → local target
ssh -R 9000:localhost:3000 remote-host

# SOCKS proxy (dynamic forwarding)
ssh -D 1080 bastion
# → Use local 1080 as a SOCKS5 proxy

Note that port forwarding should be used in accordance with organizational policies. Directly connecting an IDE to a production DB can be problematic from an audit and governance perspective. Check your team’s policies.

File Transfer — scp and rsync

Tools for transferring files over SSH channels are also commonly used.

# Local → remote
scp ./deploy.sh ioob@prod-api:/tmp/

# Remote → local
scp ioob@prod-api:/var/log/app.log ./

# Entire directory
scp -r ./dist ioob@prod-api:/var/www/

For efficiently synchronizing only the changes, rsync is much faster.

rsync -avz -e ssh ./dist/ ioob@prod-api:/var/www/dist/
# -a: recursive + preserve permissions, etc.
# -v: verbose
# -z: compress during transfer
# -e ssh: use SSH channel (default)

scp copies everything wholesale, while rsync skips files that haven’t changed and transfers only the differences for those that have. In deployment scripts, rsync is more common.

Practical Troubleshooting Sequence

Let’s wrap up by tracing the order in which you’d pull out these tools when facing an “API isn’t working” situation.

flowchart TB
    A["1. Try curl -v"] --> B{"Response received?"}
    B -- "Yes" --> C["If not 200, it's a server app issue"]
    B -- "No" --> D{"Could not resolve host?"}
    D -- "Yes" --> E["Check DNS with dig"]
    D -- "No" --> F{"Connection refused?"}
    F -- "Yes" --> G["Check target port with ss -tlnp"]
    F -- "No" --> H{"Timed out?"}
    H -- "Yes" --> I["Check route with ping / traceroute<br/>Check firewall / security groups"]

Once this thought process becomes second nature, the time spent wandering “why isn’t it working?” drops noticeably, because you can immediately reach for the right tool at each step.
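The decision tree above can be sketched as a small shell helper keyed off curl’s exit codes (6 = could not resolve host, 7 = failed to connect, 28 = timeout are curl’s documented codes; the diagnose function name and the health URL are made up for this sketch):

```shell
# Map curl's exit code to the next diagnostic step from the flowchart above.
diagnose() {
    case "$1" in
        0)  echo "response received: check the HTTP status; non-200 means a server app issue" ;;
        6)  echo "could not resolve host: check DNS with dig" ;;
        7)  echo "connection refused: check the target port with ss -tlnp" ;;
        28) echo "timed out: check the route with ping/traceroute and firewall rules" ;;
        *)  echo "other failure (curl exit $1): see 'man curl' EXIT CODES" ;;
    esac
}

# Usage: run curl, then feed its exit code to diagnose
curl -sf --max-time 5 -o /dev/null https://api.example.com/health
diagnose "$?"
```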


In the next part, we’ll cover systemd and service management. Most processes on Linux servers run as systemd services. We’ll look at managing state with systemctl, understanding unit file structure, and querying logs with journalctl.

-> Part 6: Systemd and Service Management

