Public entries tagged #networking

v1.1 is out! - a native Wireshark frontend for Sailfish OS.

Live packet capture · Protocol tree · Hex dump · Follow TCP Stream · Save .pcapng · Interface picker · BPF filters

Built on Wireshark 3.6.24 + Qt 5.6/Silica QML. Because your phone runs Linux and should act like it.

build.sailfishos.org/package/s

Continue reading →

⚡ TCP_NODELAY — The one flag every game server and trading system sets
By default, TCP is trying to be clever. Too clever.

It uses an algorithm called Nagle’s Algorithm — introduced in 1984 — that buffers small packets and waits before sending them. The idea: bundle multiple small writes into one bigger packet to reduce network overhead.
Sounds smart. In the wrong context, it’s a latency killer. 💀

🎮 Why it hurts game servers
Your game sends a tiny position update — 20 bytes. Nagle says: “Wait, maybe more data is coming.” So it holds the packet until the previous one is ACKed; combined with delayed ACKs on the receiving side, that wait can stretch to 200ms.
In a first-person shooter, 200ms feels like an eternity.

📈 Why it hurts trading systems
A market order hits your server. Nagle buffers it. Your competitor’s order lands 40ms earlier. You missed the trade.
In high-frequency trading, microseconds are money.

🔧 The fix is one line:
#include <netinet/tcp.h>

int flag = 1;
setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));

Every packet ships immediately. No buffering, no waiting.

⚠️ One caveat: TCP_NODELAY increases network overhead for chatty protocols. For bulk file transfers or HTTP it’s usually wrong. For real-time systems it’s almost always right.

🐧 Sometimes the smartest thing your OS can do is get out of the way.

Continue reading →

(netmcr.uk/) is on again this Thursday (12th).

The talk will be by Mark Tearle, titled:

‘It is a disaster! Reacting to the Unexpected’

Mark has flown across the world, avoiding flight disruptions, to present an interactive talk about disasters that befall networks, data centres and telecommunications infrastructure across the globe. A curated set of incidents will be discussed and the question posed - how would you or your organisation respond?

Go and chat with some nice people, have some weird 🍻, and if you're hungry, 🍔 and 🍟!

Continue reading →

Our Mastodon instance "burningboard.net" now internally uses **ONLY** the Internet Protocol Version 6. I have successfully migrated away from all RFC 1918 addresses in the internal infrastructure connections.

Nginx -> Mastodon: IPv6
Mastodon -> PostgreSQL: IPv6
Mastodon -> Opensearch: IPv6
Mastodon -> Sidekiq: IPv6
Mastodon -> Loki: IPv6
Sidekiq -> PostgreSQL: IPv6
Prometheus -> Mastodon: IPv6

All using globally routed unicast addresses, with proper routing and packet filtering via "pf" (FreeBSD).

Outbound connections to legacy hosts (for example for Federation) use NAT64 via Tayga.

Inbound, Nginx is the only component that supports IPv4 (via NAT), on a best-effort basis. But I refuse to put much work into this. It's 2026, and IPv4 is a dying, smelly protocol that I don't even monitor anymore.
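For reference, a minimal Tayga configuration for this kind of NAT64 setup might look like the following. The addresses are illustrative, not the instance's actual values; note the dynamic pool is private address space, but it exists only on the translator host and never appears on the internal network:

```
# /etc/tayga.conf — illustrative values only
tun-device nat64
ipv4-addr 192.168.255.1        # Tayga's own IPv4 address inside the pool
prefix 64:ff9b::/96            # well-known NAT64 prefix (RFC 6052)
dynamic-pool 192.168.255.0/24  # local-only pool for mapped legacy hosts
```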

If someone looks at the firewall rules... yes, we do run a (private) Factorio server on our Mastodon system :factorio:

@tux

Continue reading →

🐄 The Thundering Herd Problem — when your server stampedes itself

Imagine 10 worker processes, all waiting on the same socket for new connections. One connection arrives — and the kernel wakes up ALL 10 workers at once.

😴 → ⚡ → 🐄🐄🐄🐄🐄🐄🐄🐄🐄🐄

Only one wins. The other 9 realize there’s nothing to do and go back to sleep. Pure CPU waste — and under burst traffic it gets ugly fast.

🔧 Fix #1 — Kernel 2.6:
Linux made accept() wake-ups exclusive: the kernel wakes only one waiting process per connection. Better, but all workers still share one socket and one accept queue.

⚡ Fix #2 — SO_REUSEPORT (Kernel 3.9):
Each worker gets its own socket. The kernel decides who gets the connection before waking anyone up. No stampede, no wasted wake-ups.

💡 The herd finally learned to queue. 🐧

Continue reading →

Linux SO_REUSEPORT: The Secret Weapon for High-Performance Networking

Ever wondered how modern servers handle millions of connections without breaking a sweat? One underrated hero is the Linux socket option SO_REUSEPORT.

Introduced in Linux 3.9, it allows multiple sockets to bind to the same IP and port — enabling true kernel-level load balancing across workers, with no proxy layer needed.
Why it matters:

🔀 Kernel distributes connections across all bound sockets

⚡ Each worker gets connections directly — no bottleneck

🔄 New workers can bind before old ones shut down → zero-downtime restarts

Used by nginx, HAProxy, Node.js cluster, and high-frequency trading systems alike.
One socket option. Massive throughput gains. 🐧

Continue reading →

Subscribe to #networking entries via RSS feed