Hi. Sorry for the delay.
A1:
It's because zstd is somewhat slower than lz4.
It matters when you query hot data (that resides in the page cache) or when you use a high-speed disk subsystem (NVMe, for example),
and when your queries are rather cheap computationally.
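If most of your data is cold and your workload is disk-bound, zstd can still pay off. The compression method is chosen in the <compression> section of the server config (config.xml); roughly like this (the thresholds are only illustrative, check the documentation of your version for the exact matching rules):

    <compression>
        <case>
            <!-- use zstd only for large parts; parts that match no case stay on lz4 (the default) -->
            <min_part_size>10000000000</min_part_size>        <!-- 10 GB -->
            <min_part_size_ratio>0.01</min_part_size_ratio>   <!-- part is at least 1% of the table -->
            <method>zstd</method>
        </case>
    </compression>
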
Here are more details:
https://groups.google.com/d/msg/clickhouse/QXUXHCtRN90/f6T9lhGXCQAJ
A2:
Nothing bad will happen. ClickHouse will apply the new compression scheme to new data parts and to existing data parts as they are merged.
Don't forget to use an identical compression configuration on all replicas.
(If replicas have different compression configurations, nothing bad will happen either, except that a replica will detect an inconsistency after doing a merge and will download the merged part from another replica instead of using its locally merged one.)
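If you want to watch the change take effect, one way is to compare the compression ratios of parts: parts written or merged after the change will show a different ratio if you switched between lz4 and zstd. A rough sketch ('default' and 'hits' are placeholders for your database and table):

    SELECT
        partition,
        name,
        modification_time,
        round(data_uncompressed_bytes / data_compressed_bytes, 2) AS compression_ratio
    FROM system.parts
    WHERE database = 'default' AND table = 'hits' AND active
    ORDER BY modification_time
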
A3:
Yes, you can. (You have to restart ClickHouse for the new configuration to take effect.)
The new compression scheme takes effect lazily: ClickHouse applies it to old data only while merging.
Data is not forcefully re-merged, so it is possible that some old data will never be re-merged at all.
To force a merge so that the new compression takes effect, you can run:
OPTIMIZE TABLE table PARTITION yyyymm FINAL
This will re-merge the specified partition (you need enough free disk space for the merge to proceed).
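For example, you can first list partitions and their on-disk size (so you know whether there is enough free space), then re-merge one of them. 'hits' and 201812 are placeholders, and system.parts column names can differ slightly between versions:

    SELECT partition, formatReadableSize(sum(bytes_on_disk)) AS size_on_disk
    FROM system.parts
    WHERE database = 'default' AND table = 'hits' AND active
    GROUP BY partition
    ORDER BY partition

    OPTIMIZE TABLE hits PARTITION 201812 FINAL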