Hi.
We're facing a problem with too many WAL files. The attachment is the log from DB startup. We don't set 'max_total_wal_size' (it defaults to 0), but the description in options.h says:
// Once write-ahead logs exceed this size, we will start forcing the flush of
// column families whose memtables are backed by the oldest live WAL file
// (i.e. the ones that are causing all the space amplification). If set to 0
// (default), we will dynamically choose the WAL size limit to be
// [sum of all write_buffer_size * max_write_buffer_number] * 4
// For example, with 15 column families, each with
// write_buffer_size = 128 MB
// max_write_buffer_number = 6
// max_total_wal_size will be calculated to be [15 * 128MB * 6] * 4 = 45GB
//
// The RocksDB wiki has some discussion about how the WAL interacts
// with memtables and flushing of column families.
//
// https://github.com/facebook/rocksdb/wiki/Column-Families
// This option takes effect only when there are more than one column
// family as otherwise the wal size is dictated by the write_buffer_size.
//
// Default: 0
//
// Dynamically changeable through SetDBOptions() API.
uint64_t max_total_wal_size = 0;
From the info log in the attachment we can see that there are 7 column families. We set Options.write_buffer_size=64MB and Options.max_write_buffer_number=8, so by the formula above (which sums over all 7 column families) the WAL size limit should be [7 * 64MB * 8] * 4 = 14336MB. But there were actually more than 500 WAL files when the DB started, and the total WAL size far exceeds that limit.