--
You received this message because you are subscribed to the Google Groups "ScyllaDB users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scylladb-users+unsubscribe@googlegroups.com.
To post to this group, send email to scylladb-users@googlegroups.com.
Visit this group at https://groups.google.com/group/scylladb-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/scylladb-users/cee42dae-b0d2-40f7-850a-c2a6c27d85d9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
There is no such limit, and I've seen Scylla read and write at multiple GB per second.
In your case, it's probably limited by IOPS, not bandwidth. During startup it probably issues large sequential reads (commitlog replay), but here it's doing small random reads. Note that avgrq-sz=13 (13 512-byte sectors, so about 6.5 kB/request), and the queue size is around 20.
(in fact, dividing read bandwidth by r/s, we see about 5 kB/read).
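The two per-read estimates can be cross-checked directly from the iostat columns. A minimal sketch; the bandwidth and request-rate sample values below are illustrative stand-ins, not figures from the thread:

```python
# avgrq-sz is reported in 512-byte sectors, so 13 sectors ~= 6.5 kB/request.
SECTOR_BYTES = 512

def request_size_bytes(avgrq_sz_sectors):
    """Per-request size implied by the avgrq-sz column."""
    return avgrq_sz_sectors * SECTOR_BYTES

def bytes_per_read(read_kb_per_s, reads_per_s):
    """Independent cross-check: read bandwidth divided by read request rate."""
    return read_kb_per_s * 1024 / reads_per_s

print(request_size_bytes(13))                 # 6656 bytes, about 6.5 kB
print(bytes_per_read(50_000, 10_000))         # hypothetical 50 MB/s at 10k r/s: 5120 bytes, about 5 kB
```

If the two estimates roughly agree, as they do here, the workload really is dominated by small random reads rather than a few large transfers.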
You can run diskplorer [1] when the system is idle to see what the disk is capable of. The disk is probably not far from being maxed out, and at peak load it's pushed past that.
From the counters I note:
- hit rate about 50%, does that make sense for your read load?
- io-queue-*/delay-* reaching 300 ms, indicating disk overload
- no writes happening; if that's the steady state and you're not already using the Leveled compaction strategy, consider switching to it
- some data reads from external nodes (instead of digest reads); are you not using prepared statements for some reads?
Hi,
The data might not be in cache because it is warming up, or because the extra memory overhead required to keep it in cache is pushing some partitions out.
You can examine the various cache metrics such as number of partitions, hit rate, miss rate, and evictions.
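For example, the hit rate can be derived from the cache counters that Scylla exposes in Prometheus text format (typically on port 9180). A minimal sketch; the metric names and sample values here are assumptions and may differ across Scylla versions:

```python
# Derive a cache hit rate from Prometheus-format text metrics.
# Sample payload standing in for the output of http://<node>:9180/metrics.
sample = """\
scylla_cache_reads_with_misses 4980
scylla_cache_reads 10020
"""

def parse_metrics(text):
    """Parse 'name value' lines into a dict, skipping comments."""
    metrics = {}
    for line in text.splitlines():
        if line and not line.startswith("#"):
            name, value = line.rsplit(None, 1)
            metrics[name] = float(value)
    return metrics

m = parse_metrics(sample)
hit_rate = 1 - m["scylla_cache_reads_with_misses"] / m["scylla_cache_reads"]
print(f"hit rate: {hit_rate:.1%}")  # about 50% with these sample counters
```

Watching the same counters over time (rather than their absolute values) distinguishes a cache that is still warming up from one that is steadily evicting.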
You are right about the exception; we should swallow it and convert it to a metric (with an alert attached) instead.
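The pattern being described is to catch the exception at the boundary, count it in a metric that monitoring can alert on, and carry on. A minimal sketch with a hand-rolled counter; the names are illustrative, and in practice you'd use your metrics library's counter type:

```python
import logging
from collections import Counter

# Error counters that a monitoring system could scrape and alert on.
errors = Counter()

def swallow_to_metric(metric_name):
    """Run the wrapped function, converting exceptions into a counted metric."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                errors[metric_name] += 1
                logging.exception("swallowed error (counted as %s)", metric_name)
                return None
        return wrapper
    return decorator

@swallow_to_metric("reader_errors")
def risky_read():
    raise IOError("transient failure")

risky_read()                      # does not raise; the failure is counted
print(errors["reader_errors"])    # 1
```

The alert then fires on the counter's rate rather than on a crash, which keeps the failure visible without taking the process down.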