
How Linux networking configuration evolved from ifconfig to systemd

A deep dive into 25+ years of Linux networking tools—from net-tools and ifconfig to iproute2, systemd-networkd, and modern DNS resolution with systemd


If you've been working with Linux for more than a few years, you've probably noticed that networking configuration has gone through some serious changes. What started as simple text files and a handful of commands has evolved into a complex ecosystem of tools, daemons, and abstractions. And if you're like me, you've probably typed ifconfig only to be met with "command not found" on a fresh Ubuntu install at least once.

Let's walk through how we got here—from the early days of net-tools to the systemd-dominated present—and understand why these changes happened in the first place.

THE OLD GUARD: NET-TOOLS AND FRIENDS

Back in the day (we're talking late 90s through early 2010s), Linux networking was straightforward. You had a small set of tools that did specific jobs:

  • ifconfig - configure network interfaces

  • route - manage routing tables

  • netstat - network statistics and connections

  • arp - ARP cache manipulation

  • /etc/resolv.conf - DNS configuration (literally just a text file)

This was the net-tools package, and it worked. Sort of. The problem was that these tools were designed for a simpler time when Linux networking meant "configure eth0 with a static IP and you're done." As networking grew more complex—VLANs, bridges, tunnels, multiple routing tables, policy-based routing—net-tools started showing its age.

Here's what a typical interface configuration looked like:

# Bring up eth0 with a static IP
ifconfig eth0 192.168.1.100 netmask 255.255.255.0 up

# Add a default route
route add default gw 192.168.1.1

# Check what you just did
ifconfig eth0
route -n

Simple enough for basic use cases, but try doing anything advanced and you'd quickly hit walls. Want to create a VLAN? You'd need vconfig. Bridge interfaces? That's brctl. The tooling was fragmented, and the underlying kernel interfaces (ioctls) these tools used were showing their limitations.
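
To make the fragmentation concrete, here's roughly what VLAN and bridge setup looked like with the old separate tools (a sketch of the legacy vconfig/brctl commands; exact flags varied between versions):

# VLANs needed the separate vconfig tool
vconfig add eth0 100
ifconfig eth0.100 192.168.100.1 netmask 255.255.255.0 up

# Bridges needed the separate brctl tool
brctl addbr br0
brctl addif br0 eth0
ifconfig br0 up

Three different tools, three different syntaxes, just to configure one machine's networking.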

ENTER IPROUTE2: THE MODERNIZATION

Around 2003, the iproute2 suite started gaining traction. The crown jewel was the ip command—a single unified tool that could handle interfaces, routing, tunnels, and more. It was built on top of the newer netlink socket interface, which gave it more power and flexibility than the old ioctl-based tools.

The same configuration from above now looks like this:

# Bring up eth0 with a static IP
ip addr add 192.168.1.100/24 dev eth0
ip link set eth0 up

# Add a default route
ip route add default via 192.168.1.1

# Check what you just did
ip addr show eth0
ip route show

Notice the CIDR notation (/24 instead of netmask 255.255.255.0)—this was one of many improvements. But more importantly, ip could do things that net-tools simply couldn't:

# Create a VLAN
ip link add link eth0 name eth0.100 type vlan id 100

# Set up policy-based routing with multiple routing tables
ip rule add from 10.0.0.0/8 table 100
ip route add default via 192.168.2.1 table 100

# Configure network namespaces (crucial for containers)
ip netns add production
ip link set eth1 netns production

This last feature—network namespaces—became absolutely critical for containerization. Docker, Kubernetes, and basically every modern container runtime rely on network namespaces to isolate network stacks between containers.
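
To see why, here's a rough sketch of the pattern container runtimes use under the hood: a veth pair with one end moved into a namespace. The names and addresses here are arbitrary:

# Create a namespace and a veth pair
ip netns add demo
ip link add veth-host type veth peer name veth-ns

# Move one end into the namespace, then address both ends
ip link set veth-ns netns demo
ip addr add 10.200.1.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.200.1.2/24 dev veth-ns
ip netns exec demo ip link set veth-ns up

# The namespace now has its own isolated stack
ip netns exec demo ip route add default via 10.200.1.1
ping -c 1 10.200.1.2

Swap "demo" for a container ID and add some NAT rules, and you have the essence of Docker's default bridge networking.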

By the mid-2010s, major distributions started deprecating net-tools. Debian and Ubuntu stopped installing it by default. Red Hat marked it as deprecated. The message was clear: learn ip or get left behind.

INTERFACE MANAGEMENT GETS COMPLICATED

So now we had better low-level tools, but who was actually calling them? In the early days, you'd have distribution-specific scripts:

  • Debian/Ubuntu: /etc/network/interfaces file, managed by ifupdown

  • Red Hat/CentOS: /etc/sysconfig/network-scripts/ifcfg-* files, managed by network service

  • SUSE: YaST with its own configuration format

Each distribution did things differently, and migrating configurations between distros was painful. Worse, these systems were designed for static configurations—they struggled with dynamic scenarios like laptops moving between networks or cloud instances with changing metadata.
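
To see how different these were, here's the same static configuration in the Debian and Red Hat formats (illustrative snippets; option names followed each distro's own documentation):

# Debian/Ubuntu: /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1

# Red Hat/CentOS: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.100
PREFIX=24
GATEWAY=192.168.1.1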

Two major players emerged to solve this:

NetworkManager

NetworkManager arrived to handle dynamic networking scenarios. It was perfect for desktops and laptops that needed to switch between WiFi networks, handle VPNs, and deal with frequent network changes. It introduced D-Bus APIs for managing connections and provided both GUI and CLI tools:

# List connections
nmcli connection show

# Connect to WiFi
nmcli device wifi connect "MyNetwork" password "MyPassword"

# Create a new static connection
nmcli connection add con-name "static-eth0" \
  ifname eth0 type ethernet \
  ip4 192.168.1.100/24 gw4 192.168.1.1
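
A profile doesn't take effect until it's activated; a quick sanity check afterwards looks something like this:

# Activate the profile and inspect the result
nmcli connection up "static-eth0"
nmcli -f GENERAL,IP4 device show eth0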

NetworkManager became the default on Fedora, RHEL, and Ubuntu Desktop. It's great for interactive systems but can be overkill for servers with static configurations.

systemd-networkd

When systemd took over init, it brought its own networking daemon: systemd-networkd. Unlike NetworkManager, it's designed for servers and embedded systems with relatively static configurations. Configuration lives in .network files under /etc/systemd/network/:

# /etc/systemd/network/10-static-eth0.network
[Match]
Name=eth0

[Network]
Address=192.168.1.100/24
Gateway=192.168.1.1
DNS=8.8.8.8
DNS=8.8.4.4

Then enable and start it:

systemctl enable --now systemd-networkd

The systemd approach is declarative and integrates cleanly with the rest of the systemd ecosystem. Cloud images (especially for AWS, GCP, Azure) often use systemd-networkd because it's lightweight and predictable.
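
systemd-networkd also ships with its own inspection tool, networkctl, which is worth knowing for quick checks:

# List links and whether networkd manages them
networkctl list

# Addresses, routes, DNS, and recent log lines for one interface
networkctl status eth0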

Here's where it gets interesting: on a modern Linux system, you might have multiple things trying to manage networking. You could have systemd-networkd managing physical interfaces while docker manages bridge interfaces and NetworkManager handles WiFi. They usually stay out of each other's way, but conflicts can happen.
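
One common way to avoid such conflicts is to explicitly tell NetworkManager to leave certain interfaces alone, using its documented unmanaged-devices option (a minimal sketch; the file name is arbitrary):

# /etc/NetworkManager/conf.d/90-unmanaged.conf
[keyfile]
unmanaged-devices=interface-name:eth0

# Apply without a full restart
nmcli general reload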

DNS RESOLUTION: FROM SIMPLE TO... SOPHISTICATED?

Now let's talk about DNS, because this has gotten wild.

The old way: /etc/resolv.conf

For decades, DNS configuration was dead simple. You edited /etc/resolv.conf:

nameserver 8.8.8.8
nameserver 8.8.4.4
search example.com

Done. Every application used the standard resolver library (libc), which read this file and made DNS queries. Simple, predictable, and easy to debug.

The problem with simple

But simple broke down in modern scenarios:

  • Split DNS: VPN connections might need different DNS servers for internal domains

  • DNSSEC: validating DNS responses cryptographically

  • DNS over TLS/HTTPS: encrypting DNS queries for privacy

  • mDNS/LLMNR: local network name resolution (think .local domains)

  • Per-interface DNS: different DNS servers for different network interfaces

Applications started implementing their own DNS logic. Browsers added DNS-over-HTTPS. VPN clients fought over /etc/resolv.conf. It was chaos.

systemd-resolved to the rescue (sort of)

systemd-resolved came along to centralize all this complexity. It's a local DNS resolver that sits between applications and actual DNS servers.

On a systemd-resolved system, /etc/resolv.conf becomes a symlink to a stub file:

$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 39 Jan 15 10:23 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf

$ cat /etc/resolv.conf
nameserver 127.0.0.53  # Points to systemd-resolved

All DNS queries go to 127.0.0.53, where systemd-resolved handles them intelligently. The resolvectl command (which replaced the older systemd-resolve) is your main interface for managing and debugging DNS:

# Check overall DNS status and per-interface DNS servers
resolvectl status

# Query a domain and see which server answered
resolvectl query example.com

# Query a specific record type (the output also notes whether the data is authenticated)
resolvectl query --type=A --class=IN example.com

# Flush DNS cache
resolvectl flush-caches

# Get statistics (cache hits, transaction counts)
resolvectl statistics

Configuring DNS with systemd-networkd

The most common way to configure DNS is through systemd-networkd's .network files. You can set DNS servers per-interface:

# /etc/systemd/network/10-eth0.network
[Match]
Name=eth0

[Network]
DHCP=no
Address=192.168.1.100/24
Gateway=192.168.1.1

# Primary and fallback DNS servers
DNS=1.1.1.1
DNS=1.0.0.1

# Domains to search (for short hostnames)
Domains=internal.company.com

# Alternatively: route only specific domains through these DNS servers (split DNS)
#Domains=~internal.company.com ~vpn.company.com

The ~ prefix on domains means "route DNS queries for this domain through this interface's DNS servers." This is crucial for VPN scenarios where you want corporate domains resolved by corporate DNS, but everything else goes to your regular DNS.

After editing, restart systemd-networkd:

systemctl restart systemd-networkd
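
On newer systemd releases, networkctl can also apply changes without bouncing the whole daemon:

# Re-read .network files and reconfigure the affected link
networkctl reload
networkctl reconfigure eth0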

Global DNS configuration with resolved.conf

For system-wide DNS settings, edit /etc/systemd/resolved.conf:

[Resolve]
# Global DNS servers, used in addition to any per-link servers
DNS=8.8.8.8 1.1.1.1

# Make the global servers the default route for all lookups
Domains=~.

# Enable DNSSEC validation (yes/allow-downgrade/no)
DNSSEC=allow-downgrade

# Enable DNS over TLS (yes/opportunistic/no)
DNSOverTLS=opportunistic

# Enable mDNS for .local domains
MulticastDNS=yes

# Enable LLMNR for local network name resolution
LLMNR=yes

# Enable the local stub listener on 127.0.0.53 (yes/udp/tcp/no)
DNSStubListener=yes

# Cache DNS responses
Cache=yes
CacheFromLocalhost=no

After changing resolved.conf, restart the service:

systemctl restart systemd-resolved

Split DNS for VPN scenarios

Here's a real-world example: you're connected to a corporate VPN via tun0 and want corporate domains (*.corp.example.com) resolved through the VPN's DNS, but everything else through Cloudflare:

# /etc/systemd/network/10-vpn.network
[Match]
Name=tun0

[Network]
# Corporate DNS server
DNS=10.10.10.10
Domains=~corp.example.com ~internal.example.com

The routing domains (prefixed with ~) tell systemd-resolved: "queries for these domains go to this interface's DNS servers only." Check if it's working:

$ resolvectl status tun0
Link 4 (tun0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: allow-downgrade
    DNSSEC supported: yes
         DNS Servers: 10.10.10.10
          DNS Domain: ~corp.example.com
                      ~internal.example.com

# Test it
$ resolvectl query jenkins.corp.example.com
jenkins.corp.example.com: 10.20.30.40       -- link: tun0

That -- link: tun0 confirms the query went through the VPN interface.

Enabling DNSSEC

DNSSEC validation adds cryptographic verification to DNS responses. In resolved.conf:

[Resolve]
# Strict validation; lookups fail if a zone's signatures don't validate
DNSSEC=yes

# Or: validate when possible, but accept unsigned responses
#DNSSEC=allow-downgrade

Check DNSSEC status for a domain:

$ resolvectl query --legend=yes cloudflare.com
cloudflare.com: 104.16.132.229
                104.16.133.229

-- Information acquired via protocol DNS in 23.4ms.
-- Data is authenticated: yes

That "authenticated: yes" means DNSSEC validation succeeded.

DNS over TLS (DoT)

To encrypt DNS queries in transit, enable DNS-over-TLS in resolved.conf:

[Resolve]
DNS=1.1.1.1#cloudflare-dns.com 1.0.0.1#cloudflare-dns.com
DNSOverTLS=yes

The #cloudflare-dns.com syntax specifies the TLS server name. Restart and verify:

systemctl restart systemd-resolved
resolvectl status | grep -i dnsovertls

Note that DoT uses port 853, so your firewall needs to allow it. Also, this is different from DNS-over-HTTPS (DoH), which systemd-resolved doesn't support natively—you'd need something like dnscrypt-proxy for that.
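
A quick way to confirm DoT is actually in use is to watch the wire while issuing a query; with DoT active you should see TCP traffic on port 853 instead of plaintext UDP on 53:

# In one terminal: watch DNS traffic on both ports
sudo tcpdump -ni any 'port 853 or port 53'

# In another: trigger a fresh (uncached) lookup
resolvectl flush-caches
resolvectl query example.com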

Runtime DNS changes with resolvectl

You can temporarily override DNS settings without editing config files:

# Set DNS servers for eth0 until next reboot
resolvectl dns eth0 8.8.8.8 8.8.4.4

# Set routing domains for eth0
resolvectl domain eth0 ~internal.example.com

# Revert to defaults
resolvectl revert eth0

These changes don't persist across reboots, which is useful for testing.

Common troubleshooting scenarios

DNS queries timing out? Check if systemd-resolved is actually running:

systemctl status systemd-resolved
journalctl -u systemd-resolved -f  # Follow logs in real-time

VPN DNS not working? Verify your VPN client is setting DNS through systemd-resolved. Some VPN clients (especially older ones) still try to directly edit /etc/resolv.conf, which doesn't work when it's a symlink. You might need to configure the VPN to use resolvectl:

# In VPN up script
resolvectl dns $INTERFACE 10.10.10.10
resolvectl domain $INTERFACE ~corp.example.com

DNS leaking when on VPN? Check which DNS servers are being used:

resolvectl status
# Look for your VPN interface and verify routing domains are set correctly

Want to bypass systemd-resolved entirely? Replace the symlink with a real file:

rm /etc/resolv.conf
echo "nameserver 8.8.8.8" > /etc/resolv.conf
echo "nameserver 8.8.4.4" >> /etc/resolv.conf

But you'll lose all the advanced features—split DNS, DNSSEC, mDNS, etc.
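
If you do go this route, you'll usually also want to stop the daemon itself, since otherwise the stub listener keeps running on 127.0.0.53 even though nothing points at it:

systemctl disable --now systemd-resolved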

Cache causing issues? Flush it:

resolvectl flush-caches
# Or restart the service entirely
systemctl restart systemd-resolved

This works great when it works. But when it breaks—say, DNS leaks after connecting to a VPN—debugging becomes harder because you now have this abstraction layer between your app and the actual DNS queries. The key is understanding resolvectl status output and watching the journal logs when issues occur.
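
When you do hit those cases, newer versions of resolvectl let you raise the daemon's log level at runtime, which makes the journal far more informative:

# Verbose logging for systemd-resolved, then follow the journal
resolvectl log-level debug
journalctl -u systemd-resolved -f

# Turn it back down when you're done
resolvectl log-level info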

The alternatives still exist

Not everyone loves systemd-resolved. You'll still find systems using:

  • dnsmasq: lightweight caching DNS proxy, popular in network appliances

  • resolvconf: the older dynamic /etc/resolv.conf updater (not to be confused with resolvectl)

  • plain old /etc/resolv.conf: some sysadmins just want simplicity and manage it manually or via configuration management

WHERE WE ARE TODAY

Modern Linux networking is powerful but complex. On a typical cloud VM running Ubuntu 22.04 or Rocky Linux 9, you might have:

  • iproute2 tools (ip, ss, bridge) for low-level interface management

  • systemd-networkd managing your primary network interface

  • systemd-resolved handling DNS with DNSSEC validation

  • Docker creating its own bridge networks and iptables rules

  • NetworkManager installed but disabled

The key insight is that Linux networking evolved from a monolithic model ("one tool per task, everything global") to a layered, namespaced, and delegated model ("multiple domains of control, isolation by default").

This makes sense given how we use Linux today. We're running containers, connecting to VPNs, managing split-horizon DNS, and dealing with cloud networking that changes dynamically. The tooling had to evolve to support these use cases.

THINGS TO KEEP IN MIND

If you're managing servers, learn the ip command inside and out. Know how to inspect routing tables, check interface statistics, and understand network namespaces. And pick either NetworkManager or systemd-networkd—running both is usually asking for trouble.
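
A few ip and ss invocations cover most day-to-day inspection:

# Compact one-line-per-interface overview
ip -br addr
ip -br link

# Traffic counters for a single interface
ip -s link show eth0

# Listening sockets with owning processes (the netstat replacement)
ss -tulpn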

If you're debugging DNS issues, understand whether systemd-resolved is in the picture. Check resolvectl status before you start editing /etc/resolv.conf (which might be a symlink anyway).

If you're writing automation, use declarative configurations (systemd-networkd .network files or NetworkManager connection files) rather than calling ip commands in scripts. Your future self will thank you.

The Linux networking stack has gotten more complex, no question. But it's also far more capable than it was 20 years ago. We're now running thousands of isolated network stacks on a single host, dynamically reconfiguring interfaces in response to cloud metadata, and validating DNS responses cryptographically. The old tools couldn't handle that—and now we have tools that can.

Just don't forget to update your muscle memory from ifconfig to ip. That command-not-found error gets old fast.