Some performance questions

Sascha Steinbiss

Aug 17, 2018, 9:15:29 AM
Hi all,

I have some questions regarding the performance of one of my applications
based on Sophia. I have observed some behavior in the Sophia database part
that is not obvious to me and that does not follow straightforwardly from
the documentation.

We are talking about an application that stores metadata about events
('observations', 'obs'); specifically, for each event it stores an integer
count and a timestamp, referenced by a longish string key built from
concatenated values. These values are updated via an upsert function that
simply increments the count and sets the timestamp to the one carried in
the upserted value.
Database writes are typically very update-heavy, with quite a few new
inserts as well. However, one would expect the new inserts to flatten out
at some point, after which the write pattern will be dominated by updates,
since most keys will already have an entry in the DB. We are aiming at a
couple of thousand updates per second.
We also have a second database ('inv') with a different key pattern, which
simply acts as an inverted index referencing keys in the first database.
Simple enough.
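For reference, the merge step of our upsert is equivalent to the following
sketch. The type and function names here are illustrative stand-ins for our
actual value encoding, not the real binding code:

```go
package main

import "fmt"

// obsValue is a stand-in for our stored value encoding:
// an event count plus the timestamp of the latest event.
type obsValue struct {
	Count     uint64
	Timestamp int64 // Unix seconds
}

// upsertMerge mirrors what our Sophia upsert callback does:
// add the incoming count and take the incoming (newer) timestamp.
func upsertMerge(existing, incoming obsValue) obsValue {
	return obsValue{
		Count:     existing.Count + incoming.Count,
		Timestamp: incoming.Timestamp,
	}
}

func main() {
	old := obsValue{Count: 41, Timestamp: 1534490000}
	upd := obsValue{Count: 1, Timestamp: 1534493729}
	fmt.Println(upsertMerge(old, upd)) // {42 1534493729}
}
```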

Everything is implemented in Go, with C bindings to Sophia 2.2. The
configuration we set is:

compaction.0 -> 1
compaction.0.compact_mode -> 1
compaction.0.branch_age -> 20
compaction.0.snapshot_period -> 20

db.observations.compaction.cache -> 4*1024*1024*1024
db.observations.compaction.gc_period -> 20
db.inverted.compaction.cache -> 4*1024*1024*1024
db.inverted.compaction.gc_period -> 20
scheduler.threads -> 5

db.observations.compression_key -> 1
db.inverted.compression_key -> 1
db.observations.compression_branch -> "lz4"
db.inverted.compression_branch -> "lz4"

db.observations.mmap -> 1
db.inverted.mmap -> 1

memory.limit -> 8*1024*1024*1024
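In code, applying these settings looks roughly like the sketch below. The
`env` type and its `setInt`/`setString` methods are placeholders standing in
for the binding's environment handle and setter calls, which differ by
binding; the keys and values match the configuration above:

```go
package main

import "fmt"

// env is a stand-in for the Sophia environment handle; in the real
// application these values go through the C binding's integer/string
// setter calls (the method names here are illustrative, not the
// binding's actual API).
type env map[string]interface{}

func (e env) setInt(key string, v int64)     { e[key] = v }
func (e env) setString(key string, v string) { e[key] = v }

// configure applies the settings listed above.
func configure(e env) {
	const gib = int64(1024 * 1024 * 1024)
	e.setInt("compaction.0.branch_age", 20)
	e.setInt("compaction.0.snapshot_period", 20)
	e.setInt("db.observations.compaction.cache", 4*gib)
	e.setInt("db.inverted.compaction.cache", 4*gib)
	e.setInt("scheduler.threads", 5)
	e.setString("db.observations.compression_branch", "lz4")
	e.setString("db.inverted.compression_branch", "lz4")
	e.setInt("db.observations.mmap", 1)
	e.setInt("db.inverted.mmap", 1)
	e.setInt("memory.limit", 8*gib)
}

func main() {
	e := env{}
	configure(e)
	fmt.Println(e["memory.limit"]) // 8589934592
}
```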

Also, every 2 minutes I am setting the following values in the hope of
triggering some cleanup or compaction steps:

scheduler.checkpoint -> 0
db.observations.compact -> 1
db.inverted.compact -> 1
log.gc -> 1
log.rotate -> 1
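The periodic trigger is a simple ticker loop; the sketch below shows the
shape of it. `triggerMaintenance` and the `set` callback are hypothetical
names for our binding calls, and the interval is shortened here so the
demo finishes quickly (in production it is 2 minutes):

```go
package main

import (
	"fmt"
	"time"
)

// triggerMaintenance sets the maintenance knobs listed above; the set
// callback is a stand-in for the binding's integer setter.
func triggerMaintenance(set func(key string, v int64)) {
	set("scheduler.checkpoint", 0)
	set("db.observations.compact", 1)
	set("db.inverted.compact", 1)
	set("log.gc", 1)
	set("log.rotate", 1)
}

func main() {
	// In production the interval is 2 * time.Minute.
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < 2; i++ {
		<-ticker.C
		triggerMaintenance(func(k string, v int64) {
			fmt.Printf("%s -> %d\n", k, v)
		})
	}
}
```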

As I hope is visible from the configuration above, I am aiming for a data
store that aggressively compacts incoming updates in the background, since
the machine it runs on has quite a number of CPUs. At the same time I want
to minimize overall disk usage (on a compressed ZFS volume), and I would
also like to enforce a RAM limit to avoid swapping or OOM situations.
Pure database reads are less frequent and less of an optimization focus.
There may be a number of prefix searches on both databases, but they
typically return only a small number of items.

Some questions:

1. I have noticed that the "log" directory grows quite a lot and without
   bound, exceeding the other two directories by far after some hours of
   ingesting data. I have attached a plot that shows this behavior,
   starting from an empty database. I would expect the log to shrink
   between compactions, holding only the data appended since the last one.
   I am not sure whether I have misunderstood the role of the append log,
   but this becomes a major issue once the log directory is several
   hundred gigabytes in size, resulting in several hours of log replay
   after a restart.
   Did I misunderstand or misconfigure the compaction settings? Does
   anyone have any insight to share, or a complete working example
   configuration? Is moving appended data from the log into the actual
   database directories what is usually meant by "compaction", or is that
   a separate process, with compaction really just the merging of
   individual files within those directories?

2. Similarly, what ways are there to reduce restart time? From the
   documentation I understand that snapshots are the way to do this.
   Where are these snapshots stored (in the non-log directories?), and
   how can I check whether they have been created correctly?

3. How can I enforce a RAM limit? There is the "memory.limit" setting --
   is that all I can do?

Apart from that, can you think of other ways to improve this update-heavy
write workload? Sophia's performance looks really good for a while but
mysteriously degrades after some time. I would be very interested in
making this work and keeping the performance good :)
I can share more information later if needed.

Best regards
