Linode Private Network Speed Tests
In preparation for launching Linsides, which provides services via Linode’s private LAN, I wanted to get an idea of how well Linode’s LAN actually performs. I thought other folks might be interested in this information, so I figured I’d post my results.
First up is network latency. How long does it take a packet to travel from one Linode to another, and then back again? To test this, I used mtr, a handy tool that combines ping and traceroute to measure network latency (here in report mode, taking 100 samples):
josh@redacted:~$ mtr -rc 100 192.168.140.180
HOST: redacted                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 192.168.140.180              0.0%   100    0.3   0.3   0.2   0.4   0.0
So, over 100 samples, the average round-trip time was 0.3 milliseconds, and no trip took more than 0.4 milliseconds. This is certainly very fast, and is pretty much what you’d expect from systems that are likely no more than a few dozen feet from each other.
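To put that 0.3 ms average in perspective, here's a quick back-of-the-envelope sketch of what it means for a chatty workload that makes many sequential round trips (the 1,000-round-trip workload is just an illustrative assumption, not something I measured):

```python
# Rough estimate of how LAN latency adds up for a chatty workload,
# using the ~0.3 ms average round-trip time from the mtr run above.
avg_rtt_ms = 0.3

# A hypothetical protocol that makes 1,000 sequential round trips
# (e.g. many small queries issued one at a time) spends roughly this
# long just waiting on the wire:
total_latency_ms = 1000 * avg_rtt_ms
print(f"{total_latency_ms:.0f} ms")  # well under half a second
```

In other words, at these latencies even a fairly chatty service between Linodes spends very little time waiting on the network.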
Next up is network throughput. For this, I used iperf. Step one was getting iperf listening on my test Linode:
root@li101-202:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Step two is to run the test from my primary Linode:
josh@redacted:~$ iperf -c 192.168.140.180
------------------------------------------------------------
Client connecting to 192.168.140.180, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.134.41 port 52600 connected with 192.168.140.180 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
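As a sanity check, the Bandwidth column follows directly from the Transfer and Interval columns. The wrinkle is that iperf reports GBytes in binary units (2^30 bytes) but Mbits/sec in decimal units (10^6 bits), so the numbers look slightly "off" until you account for that:

```python
# Reproduce iperf's Bandwidth figure from its Transfer and Interval columns.
# iperf counts GBytes in binary units (2**30 bytes) but Mbits/sec in
# decimal units (10**6 bits).
transfer_gbytes = 1.10   # from the iperf output above (rounded by iperf)
interval_sec = 10.0

bits = transfer_gbytes * 2**30 * 8
mbits_per_sec = bits / interval_sec / 10**6
print(round(mbits_per_sec))  # ~945; the small gap from the reported 941
                             # comes from iperf rounding 1.10 GBytes
```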
So, 941 Mbits/sec. I probably could have tweaked that up a Mbit/sec or two by adjusting the TCP window size, but 941 Mbits/sec is certainly a respectable number (and more or less what you’d expect from a GigE connection). One thing to note: the private interface is an alias on the primary interface. This means that the default cap on outbound bandwidth (which exists to prevent you from accidentally blowing through all your transfer in a couple of hours) applies to the private interface as well. The default cap is 50 Mbit/sec, but all it takes is a ticket to get that bumped up if needed.
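The reason for that cap becomes obvious with a little arithmetic. Assuming, purely for illustration, a plan with a 200 GB monthly transfer allowance (a made-up figure; check your own plan), here's how long it would take to exhaust it at full GigE speed versus at the 50 Mbit/sec cap:

```python
# How long until a hypothetical 200 GB transfer allowance is exhausted?
# (200 GB is an illustrative figure, not any particular Linode plan.)
quota_bits = 200 * 10**9 * 8   # 200 GB expressed in bits (decimal units)

def hours_to_exhaust(rate_mbits):
    # seconds = total bits / bits-per-second, then convert to hours
    return quota_bits / (rate_mbits * 10**6) / 3600

print(f"at 941 Mbit/s: {hours_to_exhaust(941):.1f} h")  # about half an hour
print(f"at  50 Mbit/s: {hours_to_exhaust(50):.1f} h")   # roughly nine hours
```

At full line rate, a runaway process could burn through the whole allowance before you noticed; the cap at least buys you time to catch it.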
All in all, no real surprises here. The private network performs pretty much as well as you would expect a modern network to perform. It should be noted that this is just one data point, and it’s possible performance fluctuates a bit over time, but I’ve never noticed anything like that. The private network has always been rock solid for me.