Tinc benchmark


Pablo Piaggio

Apr 1, 2016, 1:27:51 AM
to homefro...@googlegroups.com

Tinc LAN benchmarks

Context
In our last 'HFR Hack Along' meeting (March 19, 2016), we faced a couple of network performance issues: one related to the Banana Pi itself, and the other to Tinc VPN.

The purpose of this benchmark is to measure in detail how much Tinc affects network traffic. To focus only on the VPN overhead, I took the Banana Pi out of the equation (for these first tests) and tested only with regular PCs on a gigabit LAN.

Hardware used
(Note: the processors used are relevant, as I explain later on.)

Laptop: Sony VAIO
  • processor: Intel Core i3
  • RAM: 4 GB
  • NIC: Gigabit
  • Running a Live USB with Ubuntu-Mate 15.10
  • LAN IP: 192.168.2.2
  • TINC VPN IP: 10.0.0.1
Server: Dell Dimension E520
  • processor: Pentium D (2 cores)
  • RAM: 4 GB
  • NIC: Gigabit
  • Running Ubuntu Server 14.04
  • LAN IP: 192.168.2.1
  • TINC VPN IP: 10.0.0.2
Switch: D-Link DGS-1005G
  • management: unmanaged
  • Layer: Layer 3 Light
  • Speeds: 2000 Mbps (full duplex)
  • Ports: 5 gigabit ports
Cables: assortment of old cables.
  • Cat5: no, maybe?
  • Cat5_e: nop_e
  • Cat_6: sorry what's that?
  • dog licked and chewed: I can neither confirm nor deny that.

Tinc software setup:
  • Tinc version 1.0.23
  • All defaults, similar to the 'Tinc basic setup' post (a minimal sketch follows this list).
  • The VPN tunnel is established over UDP when possible, falling back to TCP otherwise.
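
For reference, this is roughly what that setup boils down to on the laptop side. The netname 'homefree' and the node names are placeholders here, not necessarily the ones actually used; the 'Tinc basic setup' post has the real walk-through.

  # /etc/tinc/homefree/tinc.conf
  Name = laptop
  ConnectTo = server

  # /etc/tinc/homefree/hosts/server   (tincd appends the server's public key to this file)
  Address = 192.168.2.1
  Subnet = 10.0.0.2/32

  # /etc/tinc/homefree/tinc-up
  #!/bin/sh
  ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0
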
Things to measure:
  • Pings from each machine to the other.
  • Raw network transfers:
    • iperf over TCP
    • iperf over UDP
    • netcat pushed by dd
  • Encrypted transfers:
    • rsync using the default encryption algorithm (aes128-ctr)
    • rsync using a light encryption algorithm (arcfour)
(specific commands described at the end).


Executive summary: tl;dr

Tinc reduces transfer speeds by 85-90% on average.

As far as I can tell, this is the result of heavy use of encryption. The default cipher is Blowfish, but any cipher available in OpenSSL can be configured (read here). In theory, then, using a less CPU-intensive algorithm should result in faster network transfers.
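
For example, in Tinc 1.0 the cipher is a host configuration variable, so something along these lines in the host files (kept in sync on both nodes; the netname below is a placeholder) should switch the UDP traffic to AES. All the numbers in this post are with the defaults, so this variation is untested here.

  # /etc/tinc/homefree/hosts/server
  Cipher = aes-128-cbc    # any cipher name OpenSSL recognizes; the default is blowfish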

It seems some newer processors have what is called the 'AES instruction set' (AES-NI), which provides hardware acceleration of encryption and decryption using the Advanced Encryption Standard. Neither of the machines used here supports it.
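
A quick way to check for it on Linux is to look for the 'aes' flag in /proc/cpuinfo; if the command prints nothing, the instructions are not there:

  grep -m1 -ow aes /proc/cpuinfo || echo "no AES instruction set"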

Apparently, the encryption implementation is much more efficient in the newer Tinc version 1.1. Not only is it more efficient, it also takes advantage of the AES instruction set when available (read here).

The instruction set is also available on the ARM architecture. Broadcom lists three processors that support it. Unfortunately, the Raspberry/Banana Pi chip (BCM2837) is not on that list (read here).
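
The same kind of check should work on ARM boards, assuming the kernel exposes the CPU feature flags; the crypto extensions show up in the 'Features' line:

  grep -m1 Features /proc/cpuinfo    # look for 'aes' among the listed features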


Benchmark details

Ping

Tinc increases the average ping time to more than two and a half times the direct value (+150% to +190%).
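
The ping commands are not repeated in the Commands section below; they were the usual ones, along the lines of the following (the count here is a placeholder), with SERVER and LAPTOP as defined in that section and the 'avg' value read from the rtt summary:

  ping -c 10 "$SERVER"    # from the laptop
  ping -c 10 "$LAPTOP"    # from the server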


iperf over TCP

Tinc decreases the transfer speed by more than 90%.


iperf over UDP

Tinc decreases the transfer speed by an average of almost 90%.


dd | netcat

Tinc decreases the transfer speed by an average of almost 90%.


rsync default encryption

Tinc decreases the transfer speed by an average of about 85%.


rsync light encryption

Tinc decreases the transfer speed by an average of almost 90%.


Data

Data combined (from the 2 tables below). %var = (Direct - Tinc) / Direct * 100; negative values in the ping rows mean the round-trip time increased.



Test                       Unit         Direct     Tinc     %var
ping, laptop -> server     msec           0.38     0.99  -159.74
ping, server -> laptop     msec           0.29     0.85  -190.75
iperf tcp, upload          Mbits/sec    834.67    70.37    91.57
iperf tcp, download        Mbits/sec    853.33    75.87    91.11
iperf udp, upload          Mbits/sec    506.00    57.07    88.72
iperf udp, download        Mbits/sec    771.33    57.10    92.60
dd/nc, upload              MB/sec       109.33     8.97    91.80
dd/nc, download            MB/sec        86.80     9.53    89.02
rsync default, upload      MB/sec        58.28     8.81    84.89
rsync default, download    MB/sec        63.32     8.98    85.82
rsync light, upload        MB/sec        98.26     9.29    90.55
rsync light, download      MB/sec        85.60     8.80    89.72



Direct transfers




Test                       Unit         Test 1   Test 2   Test 3   Average
ping, laptop -> server     msec            N/A      N/A      N/A     0.380
ping, server -> laptop     msec            N/A      N/A      N/A     0.292
iperf tcp, upload          Mbits/sec       830      838      836       835
iperf tcp, download        Mbits/sec       862      875      823       853
iperf udp, upload          Mbits/sec       495      500      523       506
iperf udp, download        Mbits/sec       769      777      768       771
dd/nc, upload              MB/sec       110.00   108.00   110.00    109.33
dd/nc, download            MB/sec        83.70    91.00    85.70     86.80
rsync default, upload      MB/sec        62.89    56.39    55.57     58.28
rsync default, download    MB/sec        64.72    61.56    63.67     63.32
rsync light, upload        MB/sec        96.68    99.09    99.02     98.26
rsync light, download      MB/sec        82.81    89.23    84.76     85.60


Tinc Transfers



Test                       Unit         Test 1   Test 2   Test 3   Average
ping, laptop -> server     msec            N/A      N/A      N/A     0.987
ping, server -> laptop     msec            N/A      N/A      N/A     0.849
iperf tcp, upload          Mbits/sec      70.5     70.1     70.5        70
iperf tcp, download        Mbits/sec        76       76     75.6        76
iperf udp, upload          Mbits/sec      57.1       57     57.1        57
iperf udp, download        Mbits/sec      57.1     57.1     57.1        57
dd/nc, upload              MB/sec         8.80     8.60     9.50      8.97
dd/nc, download            MB/sec         9.50     9.50     9.60      9.53
rsync default, upload      MB/sec         8.71     8.81     8.90      8.81
rsync default, download    MB/sec         8.98     8.98     8.98      8.98
rsync light, upload        MB/sec         8.79     9.15     9.92      9.29
rsync light, download      MB/sec         8.83     8.74     8.82      8.80


Commands

Direct:
SERVER=192.168.2.1
LAPTOP=192.168.2.2

Tinc:
SERVER=10.0.0.2
LAPTOP=10.0.0.1

iperf over TCP:
  • upload
    • server:
      iperf -s
    • laptop:
      iperf -c "$SERVER"
  • download
    • server:
      iperf -c "$LAPTOP"
    • laptop:
      iperf -s
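
Each configuration was measured three times (the per-run numbers are in the tables above); on the client side that can be scripted as a simple loop if desired, for example:

  for i in 1 2 3; do iperf -c "$SERVER"; done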

iperf over UDP
Note: the default iperf UDP bandwidth is 1 Mbit/s. I increased the bandwidth until just before datagrams started to be lost (a sweep sketch follows these commands).
  • upload
    • server:
      iperf -u -s
    • laptop:
      iperf -u -c "$SERVER" -b 1600M
  • download
    • server:
      iperf -u -c "$LAPTOP" -b 1600M
    • laptop:
      iperf -u -s
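
A rough way to find that threshold is to sweep the target bandwidth and watch the reports for lost datagrams; the steps below are only illustrative, the 1600M value above is where I ended up:

  for bw in 200M 400M 800M 1200M 1600M; do
    iperf -u -c "$SERVER" -b "$bw"
  done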

netcat pushed by dd
Note: 'LAS.s2016.e410.mp4' is a 307M file used to transfer the data.
  • upload
    • server:
      nc -vvlnp 12345 >/dev/null
    • laptop:
      dd if=tmp/LAS.s2016.e410.mp4 bs=1M count=1K | nc -vvn "$SERVER" 12345
  • download
    • server:
      dd if=tmp/LAS.s2016.e410.mp4 bs=1M count=1K | nc -vvn "$LAPTOP" 12345
    • laptop:
      nc -vvlnp 12345 >/dev/null
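
The MB/sec values in the tables come from dd's transfer summary on the sending side (dd prints bytes, seconds and MB/s when it finishes); the receiving nc just discards the data into /dev/null. To watch the rate on the receiving end instead, something like pv could be spliced in (a variation, not used in these runs; pv has to be installed):

  nc -vvlnp 12345 | pv >/dev/null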

rsync using the default encryption algorithm (aes128-ctr)
  • upload
    • laptop:
      rsync -avP tmp/LAS.s2016.e410.mp4  "$SERVER":
  • download
    • laptop:
      rsync -avP "$SERVER":tmp/LAS.s2016.e410.mp4 .

rsync using light encryption algorithm (Arcfour)
  • upload
    • laptop:
      rsync -avP -e "ssh -c arc -o Compression=off' tmp/LAS.s2016.e410.mp4  "$SERVER":
  • download
    • laptop:
      rsync -avP -e "ssh -c arc -o Compression=off' "$SERVER":tmp/LAS.s2016.e410.mp4 .
