Hectic resource utilisation correlated with time

Marcin

<marcin.wojno.mail@gmail.com>
Sep 30, 2021, 9:32:22 AM
to ScyllaDB users

Hello,

We have a Scylla setup with a 6-node cluster of i3 instances and use one table (RF=3, default compaction). Reads, over a 24h window, oscillate between 11 and 18 kops/s per node. Once a day an upload is executed (it takes a few hours; token-aware driver, a few billion entries, most of them overwrites). TTL is set to a few days (gc_grace default).

Recently, every day we experience a strange pattern during the data upload. No matter what time it starts, at the beginning there are hectic changes in most of the metrics (CPU, disk read/write MB/s, compactions, tasks processed, along with storage proxy write timeouts). Latency hurts. After some time it "calms down": both read and write latencies are back to normal and the metrics seem steady. The most interesting/disturbing part is that this seems to be correlated with time of day. E.g. one day the upload started at 12:00 UTC and calmed down around 12:12. The next day it started 20 minutes earlier and calmed down around 12:15. Another day it started at 10:40 and calmed down similarly around 12:15. In general, over the last few days, it started at different times (+/- 2h) and almost every time calmed down at the same moment, ~12:10. Only once did the pattern not repeat itself, and then it was shifted backwards by exactly 2h.

We don't notice any correlation with compactions or reads (the change in those is smooth enough to doubt it has any effect). We don't have any process of our own that could have an impact during these hours. Currently we use single-row partitions and that will be improved soon, but a similar setup has been running for much longer and no such pattern was noticed there before.

Do you have any thoughts about the reason for such a strange pattern? Could it be some process in the DB we are not aware of? And ultimately, do you have any ideas how we could improve the stability of Scylla during the upload?

Thanks,
Marcin

Avi Kivity

<avi@scylladb.com>
Sep 30, 2021, 10:43:22 AM
to scylladb-users@googlegroups.com, Marcin

Can you share metric snapshots that show the problem? From the main and advanced dashboards.


One thing to check is whether fstrim.service was run by systemd at that time.
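Something along these lines should show it (assuming the stock systemd fstrim units):

systemctl list-timers fstrim.timer                  # last/next activation of the weekly trim timer
journalctl -u fstrim.service --since "7 days ago"   # any runs in the affected window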

Marcin

<marcin.wojno.mail@gmail.com>
Sep 30, 2021, 4:35:35 PM
to ScyllaDB users
I checked (journalctl --since "2021-09-24 11:30:00" | grep -i fstrim) and the last time fstrim was invoked was on Sep 25th.

Attaching a few screenshots representing the behaviour (from one of the instances) during the beginning of the upload (it finished a few hours after it stabilized). The screenshots are in CET, where everything calmed down around 14:10.
cpu.png
cql.png
detailed-replica.png
io.png
cpp exceptions.png
detailed-coordinator.png

Avi Kivity

<avi@scylladb.com>
Oct 3, 2021, 4:24:44 AM
to scylladb-users@googlegroups.com, Marcin

Everything looks good, but the most important chart is missing - Query I/O Queue details.

Marcin

<marcin.wojno.mail@gmail.com>
Oct 4, 2021, 3:43:50 AM
to ScyllaDB users
What do you mean it looks good? Is that expected?

Attaching the Query I/O Queue (and a few more panels, in case they turn out informative) from the same time window as before.
Screenshot 2021-10-04 at 09.42.31.png

Avi Kivity

<avi@scylladb.com>
Oct 5, 2021, 7:23:58 AM
to scylladb-users@googlegroups.com, Marcin

I meant that there is no trace of anything bad there. Although I overlooked the C++ exceptions spike and the timeouts, so it doesn't all look good.


I/O queue delays show spikes of around 10ms. Are the timeout settings lower than that?
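For the server side you can check scylla.yaml (assuming the standard /etc/scylla location; no output means the defaults are in effect):

grep -E "(read|write)_request_timeout_in_ms" /etc/scylla/scylla.yaml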


I see you are issuing batches; are those logged or unlogged batches?


Anything interesting in the logs?

Marcin

<marcin.wojno.mail@gmail.com>
Oct 5, 2021, 7:58:14 AM
to ScyllaDB users
Batches are unlogged (Java driver, version 3.10.1).

Read/write timeouts on the Scylla side are the defaults (read 5s, write 2s). The client read timeout is 100 ms. I've noticed that's not recommended and will fix it by lowering the Scylla-side timeouts soon, but I doubt it could have such an influence.

Regarding the logs (journalctl), I can see two types of messages. One is about compactions and the other is a warning regarding batch size: the warning threshold is exceeded, but not by enough to make batches fail. As I wrote initially, those batches contain multiple partitions since we currently use single-row partitions.
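(For reference, I'm looking at roughly this kind of query - the exact unit name and filters are approximate:
journalctl -u scylla-server --since "1 day ago" | grep -iE "compaction|batch"   # approximate; actual filters may differ)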

I can't see anything in the logs which could indicate the reason for the C++ exceptions spike. Is there any better place to look for issues?

Avi Kivity

<avi@scylladb.com>
Oct 5, 2021, 9:39:53 AM
to scylladb-users@googlegroups.com, Marcin

Did you elevate the batch warning/fail thresholds? If so, it's probably large batches causing allocation failures and stressing the allocator. Do you see warnings about reactor stalls in the logs? What are your batch sizes?
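A quick way to check for stalls (assuming the standard scylla-server unit):

journalctl -u scylla-server --since "7 days ago" | grep -i "reactor stall"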


Then again, we should have seen task quota violations in the advanced dashboards, and we didn't.

Marcin

<marcin.wojno.mail@gmail.com>
Oct 5, 2021, 10:11:41 AM
to ScyllaDB users
No, batch thresholds are defaults as well:
batch_size_fail_threshold_in_kb: 50
batch_size_warn_threshold_in_kb: 5
Batch size is 8 rows (or fewer; a batch is split on the client side if its size is too big, which is defined as ~40 kB). Rows are mostly (99%) below 4 kB and the max is 42 kB (taken from nodetool tablehistograms).
I've grepped the logs for the last week and don't see anything about reactor stalls.
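(The row-size percentiles above come from nodetool, i.e. something like:
nodetool tablehistograms <our_keyspace> <our_table>
with the placeholders replaced by our actual keyspace/table names.)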

Avi Kivity

<avi@scylladb.com>
Oct 5, 2021, 10:21:54 AM
to scylladb-users@googlegroups.com, Marcin

Very strange. I saw that the streaming group had activity. Were you running repair or adding a node at the time?


Note that this doesn't explain the problem (since there appears to be enough bandwidth left over); I'm just collecting information.

Gleb Natapov

<gleb@scylladb.com>
Oct 5, 2021, 10:24:35 AM
to scylladb-users@googlegroups.com, Marcin
On Tue, Oct 05, 2021 at 05:21:49PM +0300, Avi Kivity wrote:
> Very strange. I saw that the streaming group had activity. Were you running
> repair or adding a node at the time?
>
IIRC hints use it as well.

--
Gleb.

Marcin

<marcin.wojno.mail@gmail.com>
Oct 5, 2021, 10:29:06 AM
to ScyllaDB users
No, nothing like adding a node or running a repair was executed on this DB recently.

Avi Kivity

<avi@scylladb.com>
Oct 5, 2021, 10:32:35 AM
to scylladb-users@googlegroups.com, Gleb Natapov, Marcin

On 05/10/2021 17.24, Gleb Natapov wrote:
> On Tue, Oct 05, 2021 at 05:21:49PM +0300, Avi Kivity wrote:
>> Very strange. I saw that the streaming group had activity. Were you running
>> repair or adding a node at the time?
>>
> IIRC hints use it as well.
>

Aha. It could well be hints; I see 100k hints/sec. Do you have a node down?
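Two quick things to check (the node address below is just a placeholder, 9180 being the default metrics port):

nodetool status                                        # any node reported as DN?
curl -s http://<node-ip>:9180/metrics | grep -i hint   # hint-related counters on the Prometheus endpoint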

Marcin

<marcin.wojno.mail@gmail.com>
Oct 6, 2021, 10:42:56 AM
to ScyllaDB users
I didn't find anything that could indicate that any of the nodes were down during that time. Occasionally metrics were not reported for a few seconds and pings took longer than usual, but everything seems fine apart from that.

Marcin

<marcin.wojno.mail@gmail.com>
Oct 13, 2021, 9:06:42 AM
to ScyllaDB users
Hey, do you have any ideas how to approach the issue? By the issue I mean that while the write/read load stays roughly the same, the DB initially behaves erratically and then suddenly (after a certain time) calms down and just works better.

I decreased the upload speed so that the pressure on the nodes is lower, but it takes more time. Issues are still observable, just not as correlated with time. E.g. recently, after a few hours of a pretty stable upload, it started to behave weirdly for 2h (until the upload finished): latency and network transmit had hectic spikes, and the number of C++ exceptions increased. I checked the system logs and don't see anything of interest there... This is a bit different than the previous issues, but the pattern that the behaviour of the DB changes out of the blue seems clear. Attaching screenshots.
cql.png
network transmit spikes.png
io.png
storage proxy write timeouts.png
cpu.png
latencies.png
cpp exceptions.png
detailed-coordinator.png

Avi Kivity

<avi@scylladb.com>
Oct 13, 2021, 9:11:29 AM
to scylladb-users@googlegroups.com, Marcin

Please switch to shard view to see if there's a shard that behaves differently. Sometimes the aggregate view can look normal when a single shard is misbehaving.

Marcin

<marcin.wojno.mail@gmail.com>
Oct 13, 2021, 10:01:32 AM
to ScyllaDB users
Interesting note. Unfortunately, I manually picked different shards (we use metal, hence 70 shards per node) for each of the dashboards and didn't see anything to indicate that any particular one behaves worse than the others - only shifted in time.

Moreno Garcia

<moreno@scylladb.com>
Oct 13, 2021, 10:19:54 AM
to scylladb-users@googlegroups.com
Can you share the compaction and memory panels for the same timeframe?

--
Moreno Garcia
Head of Solutions Architects
https://calendly.com/moreno-scylla
Check our Scylla Summit 2019 Presentations on Youtube

Marcin

<marcin.wojno.mail@gmail.com>
Oct 13, 2021, 11:20:31 AM
to ScyllaDB users
Sure. Attaching them for the same node as the graphs from my last batch of screenshots. The time is shifted 2h because these dashboards use another timezone.
Screenshot 2021-10-13 at 17.16.16.png
Screenshot 2021-10-13 at 17.15.27.png
Screenshot 2021-10-13 at 17.16.23.png
Screenshot 2021-10-13 at 17.15.20.png

Moreno Garcia

<moreno@scylladb.com>
unread,
Oct 13, 2021, 3:37:25 PM10/13/21
to scylladb-users@googlegroups.com
I was hoping for these:

image.png

Marcin

<marcin.wojno.mail@gmail.com>
Oct 14, 2021, 3:56:57 AM
to ScyllaDB users
Oh, sorry. This is for the last case I mentioned - when, after a few hours of upload, it started to act strangely (visible in the latencies graph starting after 18:00) until the very end of the upload.
memory-comps-replica-new.png

And this is the same graph for the initial case, where the upload acted strangely until roughly 12:10 UTC, which was the pattern for at least a week straight no matter what time the upload started (graphs are in CET, so shifted 2 hours forward). It seems correlated with the fluctuations in non-LSA used memory:
memory-comps-replica-initial.png

Moreno Garcia

<moreno@scylladb.com>
Oct 14, 2021, 11:15:50 AM
to scylladb-users@googlegroups.com
image.png

It seems the time your problem started and the spikes in background reads and writes, the timeouts, and the hints all coincide.

Avi, maybe it would be beneficial to set static compaction shares? 
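(If I remember correctly that's the compaction_static_shares option in scylla.yaml - please double-check the exact name for your Scylla version, e.g.:
grep -i compaction_static_shares /etc/scylla/scylla.yaml   # 0 should leave the dynamic compaction controller in charge)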

Marcin

<marcin.wojno.mail@gmail.com>
Oct 19, 2021, 5:57:27 AM
to ScyllaDB users
Are there any major concerns/risks with turning on that option, or is it just worth trying?