Igor Sverkos
Aug 16, 2017, 8:06:20 PM
To: percona-d...@googlegroups.com
Hi,
Today I saw the following error for the first time while executing
pt-table-checksum v2.2.20:
> Skipping table <dbname>.<tablename> because on the master it would be checksummed in one chunk but on these replicas it has too many rows:
> 470585 rows on backup-db1
> The current chunk size limit is 463976 rows (chunk size=231988 * chunk size limit=2.0).
I read some bug reports and worked around it by passing
'--chunk-size-limit 3' to the command.
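For my own understanding, the check behind that error message seems to boil down to the following (a sketch based only on the numbers in the message above, not on the tool's source):

```python
# Hedged sketch of the oversized-chunk check described in the error
# message; NOT the actual pt-table-checksum implementation.
chunk_size = 231988    # chunk size chosen on the master
replica_rows = 470585  # actual rows found on backup-db1

def chunk_ok(rows, chunk_size, chunk_size_limit):
    """A single-chunk checksum is allowed only while the replica's
    row count stays within chunk_size * chunk_size_limit."""
    return rows <= chunk_size * chunk_size_limit

print(chunk_ok(replica_rows, chunk_size, 2.0))  # default limit -> False
print(chunk_ok(replica_rows, chunk_size, 3.0))  # --chunk-size-limit 3 -> True
```

So with the default limit of 2.0 the table is skipped (470585 > 463976), while a limit of 3 gives enough headroom.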
However, I was wondering what was causing the difference, so I looked
into the details:
> SELECT COUNT(*) AS n FROM <dbname>.<tablename>
> +--------+
> | n |
> +--------+
> | 470585 |
> +--------+
> SELECT TABLE_ROWS FROM information_schema.TABLES where table_name = '<tablename>';
> +------------+
> | TABLE_ROWS |
> +------------+
> | 426292 |
> +------------+
...so I learned about "innodb_stats_persistent_sample_pages", which is set to 20.
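For what it's worth, the sampled TABLE_ROWS estimate undershoots the exact count by roughly 9% here (a quick check using the two numbers above):

```python
# How far off is the sampled estimate? Both numbers are from the
# queries quoted above.
actual = 470585    # exact count from SELECT COUNT(*)
estimate = 426292  # sampled estimate from information_schema TABLE_ROWS
error = (actual - estimate) / actual
print(f"estimate is {error:.1%} low")  # -> estimate is 9.4% low
```

That ~9% sampling error is apparently already enough to push the table over the default 2.0 safety factor.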
Is just increasing "chunk-size-limit" really the best way to handle
this? Does that work for larger tables as well? Isn't it a bad idea
for pt-table-checksum to rely on sampled values? Do you recommend a
higher "innodb_stats_persistent_sample_pages" value when using
pt-table-checksum?
--
Regards,
Igor