Server tasting -- testing virtual Linux servers to see how well they will perform


Mike O'Connor

Sep 14, 2022, 8:04:24 PM
to Jacktrip-users
hi all,

i've figured out Something Helpful and thought i'd share it with you.  here's a link to the web page if that's a more convenient way to read this kind of thing.


i'd love to hear thoughts from others.

mike o'c

Server tasting

Here’s a page about how to test the disk read and write speeds of a Linux server.  I’ve been looking for an easy way to evaluate and compare how virtual servers will hold up under load.  Since Jacktrip is all about moving audio data around, I’ve landed on disk performance as a good way to exercise all the parts of the server that Jacktrip will use to do that (CPU, I/O buses, etc.).

This *doesn’t* test network performance, but we’ve got other tools for that job.  I’ve been finding that pure network-performance tests don’t really help me choose servers because these days all servers run on networks that are vastly faster than anything we’ll ever need.  I’m much more interested in how well the servers move data around once they’ve got it.  There’s a *lot* of variability there.

I titled this web page “Server Tasting” because that’s how I do it.  I build a gaggle of servers and conduct a little tasting session that compares the results of these two command-line tests (thanks to Linode-support for these).  As soon as I find a tasty one in a location that will work for the session, I stop building new ones and delete the rejects.  I can taste a lot of servers for not much money this way and the results have been Pretty Good.

The Two Tests

I got these from Linode Support — who have always been there when I needed help.  Thanks folks!


This “dd” command tests how fast the disk can write one great big file.  This version writes 4000 blocks of 4 MB each, 16 GB in total, which can fill the disk on a 25 GB Nanode if it’s got other stuff on it.  Consider reducing those numbers if the disk isn’t empty.

dd if=/dev/zero of=test_file bs=4M count=4000

The result should be at least 3 MB/s per stereo channel of audio.  My speedy Nanode (smallest/cheapest Linode) is returning 1.2 GB/s just now.  Darn nifty, lots faster than what I need.

dd if=/dev/zero of=test_file bs=4M count=4000
4000+0 records in
4000+0 records out
16777216000 bytes (17 GB, 16 GiB) copied, 13.7779 s, 1.2 GB/s
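One hedge worth adding: dd’s number can be inflated by the Linux page cache, because the kernel may report the write finished before the data actually reaches the disk.  A sketch of a more cache-honest variant (sizes shrunk here for illustration) uses dd’s conv=fdatasync operand to include the flush in the timing, then reads the file back and cleans up:

```shell
# Write 100 MB (25 x 4M blocks); conv=fdatasync makes dd wait for the
# data to reach the disk before reporting a speed.
dd if=/dev/zero of=test_file bs=4M count=25 conv=fdatasync
# Read it back. Note: this read may be served from the page cache unless
# you drop caches first (as root: echo 3 > /proc/sys/vm/drop_caches).
dd if=test_file of=/dev/null bs=4M
# Clean up the test file.
rm -f test_file
```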


This “fio” command runs a similar test that produces more information and stresses the server a little more.  You may need to install it first: “apt install fio”.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=500M --readwrite=randrw --rwmixread=75

Here are the results on Speedy Nanode, showing lower throughput (reading about 373 MB/s and writing about 124 MB/s).  Still, dividing 124 by 3 predicts a theoretical capacity of around 40 channels of audio writing.  That equates to a 20-person stereo session right at the ragged edge, or comfortably keeping up with a 15-person stereo session.
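That arithmetic can be written down as a tiny sketch; the 3 MB/s-per-channel figure and the 124 MB/s write speed are just the assumptions above, and two channels per stereo performer:

```shell
# Rough capacity estimate: measured write bandwidth divided by the
# per-channel rule of thumb, then two channels per stereo performer.
write_mbps=124   # fio write result from this run, in MB/s
per_channel=3    # MB/s per audio channel (rule of thumb)
channels=$(( write_mbps / per_channel ))
performers=$(( channels / 2 ))
echo "$channels channels, roughly $performers stereo performers"
# prints: 41 channels, roughly 20 stereo performers
```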

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=500M --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 500MiB)
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=460525: Mon Sep 5 16:29:40 2022
read: IOPS=95.5k, BW=373MiB/s (391MB/s)(375MiB/1005msec)
bw ( KiB/s): min=362784, max=401816, per=100.00%, avg=382300.00, stdev=27599.79, samples=2
iops : min=90696, max=100454, avg=95575.00, stdev=6899.95, samples=2
write: IOPS=31.9k, BW=124MiB/s (130MB/s)(125MiB/1005msec); 0 zone resets
bw ( KiB/s): min=120952, max=134176, per=100.00%, avg=127564.00, stdev=9350.78, samples=2
iops : min=30238, max=33544, avg=31891.00, stdev=2337.70, samples=2
cpu : usr=6.47%, sys=45.92%, ctx=7617, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=95984,32016,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=373MiB/s (391MB/s), 373MiB/s-373MiB/s (391MB/s-391MB/s), io=375MiB (393MB), run=1005-1005msec
WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=125MiB (131MB), run=1005-1005msec

Disk stats (read/write):
sda: ios=74188/24613, merge=0/0, ticks=33804/9000, in_queue=42804, util=90.34%
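As a quick sanity check on fio’s report, bandwidth should just be IOPS times the 4 KiB block size.  Plugging in the write-side numbers from this run:

```shell
# ~31,900 write IOPS x 4 KiB per IO, converted to MiB/s.
iops=31900
echo "$(( iops * 4 / 1024 )) MiB/s"   # prints: 124 MiB/s
```

That matches the 124 MiB/s write bandwidth fio reports, which is reassuring.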


Results from a recent tasting:

I was preparing a server to host a public 7-person performance that was also recorded full multi-track.

I built five servers of two types (virtual and dedicated CPUs) in two locations (California and New Jersey).  I had a duplicate sneak in because I wasn’t paying attention to what I was doing.  The Nanode rounds out the bottom of the table.

The “dd” test results are the most startling in their difference, ranging over ten times in speed.  The more-complex “fio” test doesn’t show as much difference.  But the “dd” and “fio” tests agree on which server is fastest.  I chose the East-location dedicated-CPU server (the fastest row below) and was happy with the performance.

But also note how well the tiny little last-row Speedy Nanode stacks up.  That rascal is a terrific server — and can run all month for five dollars.

location  type              dd test              fio test
                            elapsed sec   MB/s   read MB/s   write MB/s
west      dedicated 8 (#1)      170         98      225          75
west      dedicated 8 (#2)      429         39      207          69
west      virtual 8             125        134      124          41
east      dedicated 8            12       1400      236          78
east      virtual 8             208         80      112          37
east      Nanode                 14       1200      393         131

Matt Keys

Jun 3, 2023, 8:13:26 AM
to jacktrip-users
dd tests should probably be 2x the memory size to eliminate any possibility of caching, and you're also missing a read of the test file to /dev/null to test read throughput. Example script:

#!/bin/sh
iam=$(whoami)
echo "Test disk IO throughput using DD."
echo ""
echo "Enter the full path to the destination file."
echo "Example: /home/$iam/test.file"
read -r d

echo ""
echo "Enter the block size to test with."
echo "Example: 4K"
read -r b

ramsize=$(free -m | sed -n '2p' | awk '{ print $2 }')
echo ""
echo "Enter the number of times to loop."
echo "The resulting size should be 2x greater than $ramsize MB."
echo "Using 4K blocks: 1M = 4GB, 2M = 8GB, 4M = 16GB, 8M = 32GB"
read -r c

echo ""
echo "Testing write to $d using $b block size $c times."
time sh -c "dd if=/dev/zero of='$d' bs='$b' count='$c' && sync"

echo ""
echo "Testing read from $d to /dev/null using $b block size."
time sh -c "dd if='$d' of=/dev/null bs='$b'"

# clean up
rm -f "$d"
sync
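For what it’s worth, the 2x-RAM sizing rule in the script can also be computed automatically instead of prompted for.  A hedged non-interactive sketch, where the block size and output path are just illustrative assumptions:

```shell
# Read total RAM from /proc/meminfo (in kB) and size the dd count so
# the test file is 2x RAM, using 4K blocks.
ram_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)
bs_kb=4
count=$(( ram_kb * 2 / bs_kb ))
echo "dd if=/dev/zero of=test.file bs=${bs_kb}K count=$count && sync"
```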
