Scylla redis api performance


Nfn Nln

<nfn.nln@protonmail.com>
Feb 18, 2020, 1:48:00 PM
to scylladb-users@googlegroups.com
Hi,

In short, below are redis-benchmark performance numbers on a single core:

scylladb, no cache: 17K TPS
scylladb, all in cache: 30K TPS
ubuntu redis-server: 1500K TPS


In detail:

I am evaluating scylla/seastar storage stack performance; I use an Intel P3700 NVMe SSD.
So I run the redis API of scylla-3.3 with cache disabled, on a single core.
I am getting pretty low numbers: redis-benchmark reports 17K TPS for GETs of 512B.
The interesting part: for some reason, scylla issues 34K IOPS, according to iostat.

Any idea why scylladb issues double the number of IOPS
compared to the number of redis transactions?


Running fio with the libaio engine and the O_DIRECT flag on a single core,
I get more than 300K IOPS, so the disk is obviously not the bottleneck;
the core that scylladb runs on is 100% utilized.
Also, if I run scylladb with cache enabled I get up to 30K TPS,
with everything in memory and nothing going to disk.
So I think it is not the seastar framework that bottlenecks redis
transactions. Just for comparison, the standard single-threaded in-memory
redis-server reaches up to 1500K TPS on a single core.
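The fio measurement described above could be reproduced with an invocation along these lines (the device path, block size, and queue depth here are assumptions, not taken from the original post):

```shell
# Hypothetical single-core random-read job: libaio engine, O_DIRECT,
# pinned to one CPU, reporting aggregate IOPS after 30 seconds.
fio --name=randread \
    --filename=/dev/nvme0n1 \
    --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 \
    --iodepth=128 --numjobs=1 \
    --cpus_allowed=2 \
    --runtime=30 --time_based \
    --group_reporting
```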

It looks like the redis API of scylladb is very CPU hungry.
Am I wrong, or am I just using the wrong configuration?

Any advice would be much appreciated.

Thanks

P.S. Below are the scylla and redis-benchmark command line options I used.

redis-benchmark -h 127.0.0.1 -p 1234 -t get -n 10000000 -d 512 -P 128

sudo scylla --options-file /etc/scylla/scylla.yaml   \
        --io-properties /etc/scylla.d/io_properties.yaml           \
        --log-to-syslog 1 --log-to-stdout 1 --default-log-level error   \
        --network-stack posix  --memory 16G --cpuset 1 --smp 1 \
        --num-io-queues 1 --max-io-requests 1024   \
        --developer-mode 1 --enable-cache 0  --redis-port 1234 \
        --unsafe-bypass-fsync 1

$ cat /etc/scylla.d/io_properties.yaml
disks:
  - mountpoint: /var/lib/scylla
    read_iops: 509897
    read_bandwidth: 2800496384
    write_iops: 262386
    write_bandwidth: 1134208256

$ scylla --version
3.3.rc1-0.20200209.0d0c1d43188

$ /usr/bin/redis-server -v
Redis server v=3.0.6 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=7785291a3d2152db


Dor Laor

<dor@scylladb.com>
Feb 18, 2020, 3:25:04 PM
to ScyllaDB users
On Tue, Feb 18, 2020 at 10:48 AM 'Nfn Nln' via ScyllaDB users
<scyllad...@googlegroups.com> wrote:
>
> Hi,
>
> In short, below are redis-benchmark performance numbers on a single core:
>
> scylladb, no cache: 17K TPS
> scylladb, all in cache: 30K TPS
> ubuntu redis-server: 1500K TPS

Quick root cause: you use Redis pipelining of 128 commands, which is not
supported by Scylla's redis API and is halfway through development. For a fair
comparison in this mode, drop the -P 128. Pipeline support is being developed by Jian.
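Using the command line from the original post, the unpipelined run would simply drop the -P flag:

```shell
# Same benchmark as in the original post, minus -P 128: each GET now
# waits for its reply before the next request is sent.
redis-benchmark -h 127.0.0.1 -p 1234 -t get -n 10000000 -d 512
```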

Besides this, Scylla, as a persistent database, performs activities that
Redis does not. In general, we don't expect to beat Redis on the same
operation. Instead, Scylla offers:
- All-core utilization
- Much better HA
- Easy manageability
- Persistence of the data, with no ugly stalls due to forks
- In-SSD vs in-memory

Please also send the Grafana monitoring data later.

>
>
> In detail:
>
> I am evaluating scylla/seastar storage stack performance; I use an Intel P3700 NVMe SSD.
> So I run the redis API of scylla-3.3 with cache disabled, on a single core.
> I am getting pretty low numbers: redis-benchmark reports 17K TPS for GETs of 512B.
> The interesting part: for some reason, scylla issues 34K IOPS, according to iostat.
>
> Any idea why scylladb issues double the number of IOPS

It may depend on how the data is stored in sstables. You can discover
which files are being accessed, and we also have virtual tables for the data.
You can also run a major compaction to see if it makes a difference.
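As a sketch, a major compaction can be triggered with nodetool, and the files being touched can be listed with lsof (the grep pattern assumes the default data directory mentioned in the original post):

```shell
# Major compaction: merge sstables so a single GET is less likely to
# touch several sstables (and their indexes), which can inflate the
# IOPS-per-request ratio.
nodetool compact

# List the files scylla currently has open under its data directory.
sudo lsof -p "$(pgrep -x scylla)" | grep /var/lib/scylla
```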

> compared to the number of redis transactions?
>
>
> Running fio with the libaio engine and the O_DIRECT flag on a single core,
> I get more than 300K IOPS, so the disk is obviously not the bottleneck;
> the core that scylladb runs on is 100% utilized.
> Also, if I run scylladb with cache enabled I get up to 30K TPS,
> with everything in memory and nothing going to disk.
> So I think it is not the seastar framework that bottlenecks redis
> transactions. Just for comparison, the standard single-threaded in-memory
> redis-server reaches up to 1500K TPS on a single core.
>
> It looks like the redis API of scylladb is very CPU hungry.

Correct, we like to utilize all of the cores.


Nfn Nln

<nfn.nln@protonmail.com>
Feb 18, 2020, 4:10:14 PM
to scylladb-users@googlegroups.com, dor@scylladb.com
Hi Dor,

Thanks for the prompt reply.

Without pipelining, redis-server goes down to 80K TPS, which is still 2.6x that of scylladb.


scylladb, no cache: 17K TPS
scylladb, all in cache: 30K TPS
ubuntu redis-server: 1500K TPS
ubuntu redis-server, no pipelining: 80K TPS

Any reference on how I can extract the "grafana monitor data"?

Thanks




Dor Laor

<dor@scylladb.com>
Feb 18, 2020, 4:20:55 PM
to Nfn Nln, scylladb-users@googlegroups.com
On Tue, Feb 18, 2020 at 1:10 PM Nfn Nln <nfn...@protonmail.com> wrote:
> Any reference on how I can extract the "grafana monitor data"?

https://docs.scylladb.com/operating-scylla/monitoring/
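Per the docs linked above, the monitoring stack is a dockerized Prometheus + Grafana setup; a minimal sketch follows (the target-file format and flags may differ between monitoring versions, so treat this as an outline rather than exact steps):

```shell
# Fetch and start the Scylla monitoring stack (assumes docker is installed).
git clone https://github.com/scylladb/scylla-monitoring.git
cd scylla-monitoring

# Point Prometheus at the local scylla node; 9180 is scylla's
# default prometheus metrics port.
cat > prometheus/scylla_servers.yml <<'EOF'
- targets:
  - 127.0.0.1:9180
EOF

./start-all.sh
```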

In general, there will always be a big difference if you compare core to core,
but I am interested in the breakdown as well. For fun, try running with all of
the cores too.

Another thing that is supposed to work well for Scylla is the memory
allocator: try using variable-sized objects and filling up memory so it is
hard to evacuate, and see what happens.
Also, run a mixed I/O workload with more writes for a long time.
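One way to approximate this with redis-benchmark (which uses a fixed value size per run, so this loops over several sizes; the port, counts, and sizes are assumptions) is:

```shell
# Mixed SET/GET workload with random keys (-r) to fill memory,
# looping over several value sizes to vary object size.
for size in 64 512 4096 16384; do
  redis-benchmark -h 127.0.0.1 -p 1234 -t set,get \
      -n 1000000 -d "$size" -r 10000000
done
```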

Nfn Nln

<nfn.nln@protonmail.com>
Feb 21, 2020, 12:54:40 AM
to Dor Laor, scylladb-users@googlegroups.com
I am aware of issue #5364
https://github.com/scylladb/scylla/issues/5364

Just for evaluation: if I don't care about correctness and I use only GET commands,
can I run the scylla redis API with basic pipelining enabled?

