Long delay reading from a 100-column table


Cong Guo

Nov 17, 2021, 5:07:43 AM
to ScyllaDB users
Hi, I'm seeing long delays when reading items from a wide-column table; the trace is below. Could this be caused by bad data modeling? Do you have any suggestions?
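For context, a trace like the one below is produced by running the statement with tracing enabled in cqlsh. The query shown here is only a guess at a reproduction, since the original statement isn't included in the post; the keyspace name (`sn_fstore`) is taken from the sstable paths in the trace, and the table name is a placeholder:

```sql
-- Hypothetical reproduction sketch. Only the keyspace (sn_fstore) is
-- visible in the sstable paths; the table name and the query itself
-- are assumptions, since the original statement is not shown.
TRACING ON;
SELECT * FROM sn_fstore.my_table;  -- my_table is a placeholder name
```

The "Start querying token range (-inf, ...]" events in the trace suggest a full token-range scan rather than a single-partition read, which is consistent with a query of this shape.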

 session_id                           | event_id                             | activity                                                                                                                                                                                                                                                                                         | scylla_parent_id | scylla_span_id  | source         | source_elapsed | thread

--------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+-----------------+----------------+----------------+---------

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb337-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                              Parsing a statement |                0 | 346750661426315 | 172.20.177.113 |              1 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb622-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                           Processing a statement |                0 | 346750661426315 | 172.20.177.113 |             75 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb81f-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                      read_data: querying locally |                0 | 346750661426315 | 172.20.177.113 |            127 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb884-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                   Start querying token range (-inf, {-9193952103139534258, end}] |                0 | 346750661426315 | 172.20.177.113 |            137 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fbc4c-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                                                                                            Scanning cache for range (-inf, {-9193952103139534258, end}] and slice {(-inf, +inf)} |  346750661426315 | 215741960800011 | 172.20.177.113 |             21 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fc922-478b-11ec-a5a0-3734e0048e74 | Reading partition range ({-9223371252538542622, pk{00162f704c4630692f5363643450716459484a3367375567}}, {-9223371021669129360, pk{0016694777754377394d70562b7a454847716e4255484641}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db |  346750661426315 | 215741960800011 | 172.20.177.113 |            349 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fc95e-478b-11ec-a5a0-3734e0048e74 | Reading partition range ({-9223371252538542622, pk{00162f704c4630692f5363643450716459484a3367375567}}, {-9223371021669129360, pk{0016694777754377394d70562b7a454847716e4255484641}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db |  346750661426315 | 215741960800011 | 172.20.177.113 |            355 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fc98b-478b-11ec-a5a0-3734e0048e74 | Reading partition range ({-9223371252538542622, pk{00162f704c4630692f5363643450716459484a3367375567}}, {-9223371021669129360, pk{0016694777754377394d70562b7a454847716e4255484641}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Data.db |  346750661426315 | 215741960800011 | 172.20.177.113 |            360 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fca40-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Index.db: scheduling bulk DMA read of size 5799 at offset 0 |  346750661426315 | 215741960800011 | 172.20.177.113 |            378 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fcaf6-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: scheduling bulk DMA read of size 2313 at offset 0 |  346750661426315 | 215741960800011 | 172.20.177.113 |            396 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fd81e-478b-11ec-a5a0-3734e0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Index.db: finished bulk DMA read of size 5799 at offset 0, successfully read 6144 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |            733 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fd836-478b-11ec-a5a0-3734e0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: finished bulk DMA read of size 2313 at offset 0, successfully read 2560 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |            735 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fdbd0-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Index.db: scheduling bulk DMA read of size 9978 at offset 0 |  346750661426315 | 215741960800011 | 172.20.177.113 |            827 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe2c5-478b-11ec-a5a0-3734e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Index.db: finished bulk DMA read of size 9978 at offset 0, successfully read 10240 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1006 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe4df-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1059 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe505-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1063 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe535-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                            /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 1536 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1068 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe555-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 34304 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1071 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe588-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1076 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fe5ad-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |  346750661426315 | 215741960800011 | 172.20.177.113 |           1080 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26feef6-478b-11ec-a5a0-3734e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1318 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fef0d-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1320 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26ff205-478b-11ec-a5a0-3734e0048e74 |                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 1536, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1396 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26ff21b-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 34304, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1398 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26ff231-478b-11ec-a5a0-3734e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1400 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e26ff246-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3705-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           1403 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e270a64d-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 67072 |  346750661426315 | 215741960800011 | 172.20.177.113 |           6011 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e270b300-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 67072, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           6336 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e27137e9-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                            /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: scheduling bulk DMA read of size 2897 at offset 2048 |  346750661426315 | 215741960800011 | 172.20.177.113 |           9739 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2713efc-478b-11ec-a5a0-3734e0048e74 |                                                                                                                /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: finished bulk DMA read of size 2897 at offset 2048, successfully read 3072 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |           9920 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2714f41-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 99840 |  346750661426315 | 215741960800011 | 172.20.177.113 |          10336 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e27159cc-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 99840, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          10606 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e272026e-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                          /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 132608 |  346750661426315 | 215741960800011 | 172.20.177.113 |          14923 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2720cb4-478b-11ec-a5a0-3734e0048e74 |                                                                                                             /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 132608, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          15186 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e272a315-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                            /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: scheduling bulk DMA read of size 3249 at offset 4608 |  346750661426315 | 215741960800011 | 172.20.177.113 |          19036 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e272a97c-478b-11ec-a5a0-3734e0048e74 |                                                                                                                /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: finished bulk DMA read of size 3249 at offset 4608, successfully read 3584 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          19200 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e272ba00-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                          /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 165376 |  346750661426315 | 215741960800011 | 172.20.177.113 |          19623 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e272c2f9-478b-11ec-a5a0-3734e0048e74 |                                                                                                             /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 165376, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          19852 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2738a23-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                          /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 198144 |  346750661426315 | 215741960800011 | 172.20.177.113 |          24951 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2739474-478b-11ec-a5a0-3734e0048e74 |                                                                                                             /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 198144, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          25215 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e273cc20-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: scheduling bulk DMA read of size 32768 at offset 65536 |  346750661426315 | 215741960800011 | 172.20.177.113 |          26640 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e273d866-478b-11ec-a5a0-3734e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3690-big-Data.db: finished bulk DMA read of size 32768 at offset 65536, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          26954 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e27434cb-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                            /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: scheduling bulk DMA read of size 2809 at offset 7680 |  346750661426315 | 215741960800011 | 172.20.177.113 |          29320 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2743bbf-478b-11ec-a5a0-3734e0048e74 |                                                                                                                /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Index.db: finished bulk DMA read of size 2809 at offset 7680, successfully read 3072 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          29498 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2744b9c-478b-11ec-a5a0-3734e0048e74 |                                                                                                                                          /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: scheduling bulk DMA read of size 32768 at offset 230912 |  346750661426315 | 215741960800011 | 172.20.177.113 |          29904 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e2745728-478b-11ec-a5a0-3734e0048e74 |                                                                                                             /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3510-big-Data.db: finished bulk DMA read of size 32768 at offset 230912, successfully read 32768 bytes |  346750661426315 | 215741960800011 | 172.20.177.113 |          30199 | shard 0

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b37c-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                Creating shard reader on shard: 1 |                0 | 346750661426315 | 172.20.177.113 |          32776 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b3f0-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                            Scanning cache for range (-inf, {-9193952103139534258, end}] and slice {(-inf, +inf)} |                0 | 346750661426315 | 172.20.177.113 |          32788 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b620-478b-11ec-964a-3739e0048e74 |                                                                         Reading partition range (-inf, {-9223068444895658024, pk{00165264454468556c5067442b7578357976715877694541}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Data.db |                0 | 346750661426315 | 172.20.177.113 |          32843 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b64e-478b-11ec-964a-3739e0048e74 |                                                                         Reading partition range (-inf, {-9223068444895658024, pk{00165264454468556c5067442b7578357976715877694541}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db |                0 | 346750661426315 | 172.20.177.113 |          32848 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b676-478b-11ec-964a-3739e0048e74 |                                                                         Reading partition range (-inf, {-9223068444895658024, pk{00165264454468556c5067442b7578357976715877694541}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Data.db |                0 | 346750661426315 | 172.20.177.113 |          32852 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b6a1-478b-11ec-964a-3739e0048e74 |                                                                         Reading partition range (-inf, {-9223068444895658024, pk{00165264454468556c5067442b7578357976715877694541}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3691-big-Data.db |                0 | 346750661426315 | 172.20.177.113 |          32856 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274b792-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Index.db: scheduling bulk DMA read of size 2676 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          32880 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274be73-478b-11ec-9f01-373be0048e74 |                                                                                                                                                                                                            Scanning cache for range (-inf, {-9193952103139534258, end}] and slice {(-inf, +inf)} |  346750661426315 | 487322110807223 | 172.20.177.113 |             23 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c1d9-478b-11ec-9f01-373be0048e74 | Reading partition range ({-9222769592701715911, pk{00166b354c7950742f4e616c553639446d565a7947617367}}, {-9222767859043588986, pk{00164675384b6e75773252686c775134515078714a314a67}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Data.db |  346750661426315 | 487322110807223 | 172.20.177.113 |            110 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c212-478b-11ec-9f01-373be0048e74 | Reading partition range ({-9222769592701715911, pk{00166b354c7950742f4e616c553639446d565a7947617367}}, {-9222767859043588986, pk{00164675384b6e75773252686c775134515078714a314a67}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Data.db |  346750661426315 | 487322110807223 | 172.20.177.113 |            116 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c240-478b-11ec-9f01-373be0048e74 | Reading partition range ({-9222769592701715911, pk{00166b354c7950742f4e616c553639446d565a7947617367}}, {-9222767859043588986, pk{00164675384b6e75773252686c775134515078714a314a67}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3692-big-Data.db |  346750661426315 | 487322110807223 | 172.20.177.113 |            121 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c24b-478b-11ec-964a-3739e0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Index.db: finished bulk DMA read of size 2676 at offset 0, successfully read 3072 bytes |                0 | 346750661426315 | 172.20.177.113 |          33155 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c271-478b-11ec-9f01-373be0048e74 | Reading partition range ({-9222769592701715911, pk{00166b354c7950742f4e616c553639446d565a7947617367}}, {-9222767859043588986, pk{00164675384b6e75773252686c775134515078714a314a67}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Data.db |  346750661426315 | 487322110807223 | 172.20.177.113 |            126 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c29d-478b-11ec-9f01-373be0048e74 | Reading partition range ({-9222769592701715911, pk{00166b354c7950742f4e616c553639446d565a7947617367}}, {-9222767859043588986, pk{00164675384b6e75773252686c775134515078714a314a67}}) from sstable /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3812-big-Data.db |  346750661426315 | 487322110807223 | 172.20.177.113 |            130 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c330-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Index.db: scheduling bulk DMA read of size 9839 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            145 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c35b-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          33182 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c38a-478b-11ec-964a-3739e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |                0 | 346750661426315 | 172.20.177.113 |          33187 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c3d3-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Index.db: scheduling bulk DMA read of size 2731 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            161 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274c4c4-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Index.db: scheduling bulk DMA read of size 9450 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            185 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274cb94-478b-11ec-964a-3739e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          33392 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274cbb7-478b-11ec-964a-3739e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          33396 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274cea5-478b-11ec-9f01-373be0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Index.db: finished bulk DMA read of size 2731 at offset 0, successfully read 3072 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            438 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274cec1-478b-11ec-9f01-373be0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Index.db: finished bulk DMA read of size 9450 at offset 0, successfully read 9728 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            441 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274cedb-478b-11ec-9f01-373be0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Index.db: finished bulk DMA read of size 9839 at offset 0, successfully read 10240 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            443 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d64d-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            634 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d67b-478b-11ec-9f01-373be0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |  346750661426315 | 487322110807223 | 172.20.177.113 |            639 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d6a9-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            643 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d6cb-478b-11ec-9f01-373be0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |  346750661426315 | 487322110807223 | 172.20.177.113 |            647 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d6f8-478b-11ec-9f01-373be0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |  346750661426315 | 487322110807223 | 172.20.177.113 |            651 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d71e-478b-11ec-9f01-373be0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |  346750661426315 | 487322110807223 | 172.20.177.113 |            655 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274d9a7-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Index.db: scheduling bulk DMA read of size 5154 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          33753 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274da58-478b-11ec-964a-3739e0048e74 |                                                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3691-big-Index.db: scheduling bulk DMA read of size 10006 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          33771 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e1d6-478b-11ec-9f01-373be0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            929 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e1f6-478b-11ec-9f01-373be0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            932 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e217-478b-11ec-9f01-373be0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3797-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            935 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e234-478b-11ec-9f01-373be0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3752-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            939 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e251-478b-11ec-9f01-373be0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            942 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e26f-478b-11ec-9f01-373be0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3587-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |  346750661426315 | 487322110807223 | 172.20.177.113 |            945 | shard 2

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e36f-478b-11ec-964a-3739e0048e74 |                                                                                                                   /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Index.db: finished bulk DMA read of size 5154 at offset 0, successfully read 5632 bytes |                0 | 346750661426315 | 172.20.177.113 |          34003 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e388-478b-11ec-964a-3739e0048e74 |                                                                                                                 /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3691-big-Index.db: finished bulk DMA read of size 10006 at offset 0, successfully read 10240 bytes |                0 | 346750661426315 | 172.20.177.113 |          34006 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e731-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          34100 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274e75a-478b-11ec-964a-3739e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |                0 | 346750661426315 | 172.20.177.113 |          34104 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274ef05-478b-11ec-964a-3739e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          34300 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e274ef1c-478b-11ec-964a-3739e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3631-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          34302 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e27501e4-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Index.db: scheduling bulk DMA read of size 9923 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          34782 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e2750929-478b-11ec-964a-3739e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Index.db: finished bulk DMA read of size 9923 at offset 0, successfully read 10240 bytes |                0 | 346750661426315 | 172.20.177.113 |          34969 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e2750c41-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          35048 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e2750c95-478b-11ec-964a-3739e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |                0 | 346750661426315 | 172.20.177.113 |          35057 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e27514ca-478b-11ec-964a-3739e0048e74 |                                                                                                                  /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Data.db: finished bulk DMA read of size 32768 at offset 0, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          35267 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e27514e2-478b-11ec-964a-3739e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3676-big-Data.db: finished bulk DMA read of size 32768 at offset 32768, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          35269 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e2756bad-478b-11ec-964a-3739e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: scheduling bulk DMA read of size 32768 at offset 65536 |                0 | 346750661426315 | 172.20.177.113 |          37491 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e27587bf-478b-11ec-964a-3739e0048e74 |                                                                                                              /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3481-big-Data.db: finished bulk DMA read of size 32768 at offset 65536, successfully read 32768 bytes |                0 | 346750661426315 | 172.20.177.113 |          38210 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e275ad57-478b-11ec-964a-3739e0048e74 |                                                                                                                                               /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3691-big-Data.db: scheduling bulk DMA read of size 32768 at offset 0 |                0 | 346750661426315 | 172.20.177.113 |          39172 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e275ad85-478b-11ec-964a-3739e0048e74 |                                                                                                                                           /var/lib/scylla/data/sn_fstore/virtual_id-3155921041cf11ec85e9d796dfaa9909/md-3691-big-Data.db: scheduling bulk DMA read of size 32768 at offset 32768 |                0 | 346750661426315 | 172.20.177.113 |          39177 | shard 1

Cong Guo

Nov 17, 2021, 5:14:23 AM
to ScyllaDB users
I think it's because we read too many columns in one query?

Shlomi Livne

Nov 17, 2021, 5:17:53 AM
to ScyllaDB users
Can you provide the schema and the query?



--
You received this message because you are subscribed to the Google Groups "ScyllaDB users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scylladb-user...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/scylladb-users/8b49765d-f869-453a-a44d-b3b6fa15b90bn%40googlegroups.com.

Nadav Har'El

Nov 17, 2021, 5:23:58 AM
to ScyllaDB users
On Wed, Nov 17, 2021 at 12:07 PM 'Cong Guo' via ScyllaDB users <scyllad...@googlegroups.com> wrote:
Hi I met long delay reading item from wide column table, below is the trace. Is it because the bad data modeling? Do you have any suggestions?

 session_id                           | event_id                             | activity                                                                                                                                                                                                                                                                                         | scylla_parent_id | scylla_span_id  | source         | source_elapsed | thread

--------------------------------------+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+-----------------+----------------+----------------+---------

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb337-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                              Parsing a statement |                0 | 346750661426315 | 172.20.177.113 |              1 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb622-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                           Processing a statement |                0 | 346750661426315 | 172.20.177.113 |             75 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb81f-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                                                      read_data: querying locally |                0 | 346750661426315 | 172.20.177.113 |            127 | shard 1

 e26fb280-478b-11ec-964a-3739e0048e74 | e26fb884-478b-11ec-964a-3739e0048e74 |                                                                                                                                                                                                                                   Start querying token range (-inf, {-9193952103139534258, end}] |                0 | 346750661426315 | 172.20.177.113 |            137 | shard 1


Please correct me if I'm misunderstanding what I'm seeing, but it looks like this is some sort of scan of a subset of the entire table, not a query of a specific partition?
(you can clarify what you're doing by giving us an example query).

Scans indeed have significantly higher latencies than single-partition queries - even for tiny tables where the scan is expected to return just a handful of results, Scylla may still need to contact many of the nodes, and many shards on each node, to get pieces of the result and build them together. In fact, scans are particularly inefficient when the table is tiny and become quite efficient again when the table has a lot of data that needs to be scanned.
 

Avi Kivity

Nov 17, 2021, 5:26:44 AM
to scyllad...@googlegroups.com, Nadav Har'El
On 11/17/21 12:23, Nadav Har'El wrote:
> Scans indeed have significantly higher latencies than single-partition
> queries - even for tiny tables where the scan is expected to return
> just a handful of results, Scylla may still need to contact many of
> the nodes, and many shards on each node, to get pieces of the result
> and build them together. In fact, scans are particularly inefficient
> when the table is tiny and become quite efficient again when the table
> has a lot of data that needs to be scanned.


In fact scans are less efficient for small tables, if you measure
CPU/disk cost per returned row. A table with one row will need to access
every node several times, so the cost per row will be large. An empty
table will be even worse!


A full scan of a large table is quite efficient, per row, since each
access returns many rows.

Message has been deleted

Cong Guo

Nov 17, 2021, 5:34:34 AM
to ScyllaDB users
Indeed a scan op.

On Wednesday, November 17, 2021 at 6:32:45 PM UTC+8 Cong Guo wrote:

Hi, below is the query and schema.

SELECT vid FROM virtual_id LIMIT 5000


CREATE TABLE fs.vid (

    vid text PRIMARY KEY,

    aadmin_area_id int,

    aaverage_distance float,

    a_code text,

    a_id int,

    adata_point_count int,

    af_ts bigint,

    alatitude float,

    alocality_id int,

    alocation_ids list<int>,

    alongitude float,

    aneighborhood_id int,

    asub_admin_area_id int,

    asub_locality_id int,

    atotal_data_point_count int,

    badmin_area_id int,

    baverage_distance float,

    b_code text,

    b_id int,

    bdata_point_count int,

    bf_ts bigint,

    blatitude float,

    blocality_id int,

    blocation_ids list<int>,

    blongitude float,

    bneighborhood_id int,

    bsub_admin_area_id int,

    bsub_locality_id int,

    btotal_data_point_count int,

    cadmin_area_id int,

    cadmin_area_lower_percentage float,

    cadmin_area_upper_percentage float,

    c_code text,

    c_id int,

    c_lower_percentage float,

    c_upper_percentage float,

    cf_ts bigint,

    clocality_id int,

    clocality_lower_percentage float,

    clocality_upper_percentage float,

    clower_income float,

    csegment text,

    csub_locality_id int,

    csub_locality_lower_percentage float,

    csub_locality_upper_percentage float,

    cupper_income float,

    dactive_days int,

    ddeparture_weights map<text, float>,

    ddestination_weights map<text, float>,

    df_ts bigint,

    dfrom_date text,

    dline_weights map<text, float>,

    dstation_weights map<text, float>,

    dto_date text,

    dzone_id text,

    dadmin_area text,

    dadmin_area_id int,

    d_code text,

    d_id int,

    d_name text,

    dcreated_at int,

    ddeleted_at int,

    df_ts bigint,

    dlocality text,

    dlocality_id int,

    dneighborhood_id int,

    dpostal_code text,

    dsource text,

    dsource_order int,

    dsub_admin_area text,

    dsub_admin_area_id int,

    dsub_locality text,

    dsub_locality_id int,

    dthoroughfare text,

    dupdated_at int

) WITH bloom_filter_fp_chance = 0.01

    AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}

    AND comment = ''

    AND compaction = {'class': 'SizeTieredCompactionStrategy'}

    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}

    AND crc_check_chance = 1.0

    AND dclocal_read_repair_chance = 0.0

    AND default_time_to_live = 0

    AND gc_grace_seconds = 864000

    AND max_index_interval = 2048

    AND memtable_flush_period_in_ms = 0

    AND min_index_interval = 128

    AND read_repair_chance = 0.0

    AND speculative_retry = '99.0PERCENTILE';

Cong Guo

Nov 17, 2021, 6:38:47 AM
to ScyllaDB users
Do we have any guidelines on how many columns can be created in one table?

It seems "selecting 100 columns" is much slower than "selecting 50 columns" in a 100-column table. What's the upper limit on the number of columns, and what is the best design practice?

Avi Kivity

Nov 17, 2021, 6:45:06 AM
to scyllad...@googlegroups.com

If your schema needs 100 columns, it's better to use 100 columns than to hack it. There's overhead in maintaining 100 columns but it's not terrible.


For the schema below, it seems repetitive. You can consider making a/b/c/d a clustering key and then each row contains just one set of columns, instead of 4.
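A sketch of that layout (the table name and the `section` column are hypothetical, and only a few of the repeated columns are shown): the a/b/c/d prefix moves into a clustering column, so each row carries one set of columns instead of four:

```cql
-- Hypothetical normalized layout: one clustering row per a/b/c/d group.
CREATE TABLE fs.vid_by_section (
    vid              text,
    section          text,    -- 'a', 'b', 'c' or 'd', replacing the column-name prefix
    admin_area_id    int,
    average_distance float,
    code             text,
    id               int,
    f_ts             bigint,
    latitude         float,
    longitude        float,
    -- ...remaining shared columns, de-duplicated...
    PRIMARY KEY (vid, section)
);
```

Columns that apply to only one group can simply stay unset for the other rows; Scylla does not store cells for columns a row doesn't set.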

Cong Guo

Nov 17, 2021, 7:02:55 AM
to ScyllaDB users
Thanks, but if it's divided into 4 rows, will the performance degrade when I query them by partition key? I think size should be taken into consideration as well, to avoid large partitions?

Avi Kivity

Nov 17, 2021, 7:09:44 AM
to scyllad...@googlegroups.com

The partition size isn't going to change if you split it from one larger row into 4 smaller rows. Large partitions become problematic at hundreds of megabytes, not a few kilobytes.
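As a sketch (table, column, and key values hypothetical): with a clustering column, the same partition simply holds four small rows, and a single-partition read can still fetch all of them at once:

```cql
-- One partition, four clustering rows; total partition size is unchanged.
SELECT * FROM fs.vid_by_section WHERE vid = 'some-id';

-- Or fetch just one group's columns when the others aren't needed:
SELECT * FROM fs.vid_by_section WHERE vid = 'some-id' AND section = 'a';
```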

Nadav Har'El

Nov 17, 2021, 7:12:08 AM
to ScyllaDB users

On Wed, Nov 17, 2021 at 12:34 PM 'Cong Guo' via ScyllaDB users <scyllad...@googlegroups.com> wrote:
Indeed a scan op.

I know we can discuss in detail the latency of this specific scan, or why the number of columns seems to affect it, but before we dive in, it's better to understand whether this is a "real" workload or some test which is meant to approximate a real workload and perhaps doesn't.

The reason I'm asking this is that as I and Avi explained, a scan of a small table behaves nothing like a scan of a big table. Both big and small scans need to access all the nodes, and all CPUs, in the cluster, to complete the scan. For a large scan it's perfectly fine - this constant overhead is divided into millions of rows being read. But for a small table, this constant overhead gives a noticeably high latency. The question is whether you really care about this - is this what your real workload needs to do?
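The contrast can be sketched in CQL (the key value is hypothetical):

```cql
-- Single-partition read: routed only to the replicas owning this token.
SELECT vid FROM fs.vid WHERE vid = 'some-id';

-- Range scan: every node, and every shard on each node, contributes,
-- so a fixed fan-out cost is paid regardless of table size.
SELECT vid FROM fs.vid LIMIT 5000;
```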
 

Cong Guo

Nov 17, 2021, 8:01:06 AM
to ScyllaDB users
Thank you both. Actually I do scans only occasionally, mainly for testing at the loader initialization phase, so please set that aside for now.

But when I try to select half of the columns, I get the trace below. It seems the time is spent on communication between nodes?


 session_id                           | event_id                             | activity                                                                                                                                                       | scylla_parent_id | scylla_span_id  | source        | source_elapsed | thread

--------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+-----------------+---------------+----------------+---------

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99eee1f7-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                                            Parsing a statement |                0 | 265401180125439 | 172.20.147.55 |              1 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99ef160e-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                                         Processing a statement |                0 | 265401180125439 | 172.20.147.55 |           1334 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99ef1882-47a2-11ec-bf75-8936f65a44d7 | Creating read executor for token -9221536121029947242 with all: {172.20.147.55, 172.20.177.113, 172.20.192.148} targets: {172.20.147.55} repair decision: NONE |                0 | 265401180125439 | 172.20.147.55 |           1397 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99ef18b4-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                                    read_data: querying locally |                0 | 265401180125439 | 172.20.147.55 |           1403 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99f1efb0-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                read_data: sending a message to /172.20.177.113 |                0 | 265401180125439 | 172.20.147.55 |          20013 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99f204da-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                           Done processing - preparing a result |                0 | 265401180125439 | 172.20.147.55 |          20555 | shard 5

 99eec980-47a2-11ec-bf75-8936f65a44d7 | 99f23ca1-47a2-11ec-bf75-8936f65a44d7 |                                                                                                                   read_data: got response from /172.20.177.113 |                0 | 265401180125439 | 172.20.147.55 |          21983 | shard 5

Nadav Har'El

Nov 17, 2021, 8:11:53 AM
to ScyllaDB users
On Wed, Nov 17, 2021 at 3:01 PM 'Cong Guo' via ScyllaDB users <scyllad...@googlegroups.com> wrote:
Thank you both. Actually I do scans only occasionally, mainly for testing at the loader initialization phase, so please set that aside for now.

But when I try to select half of the columns, I get the trace below. It seems the time is spent on communication between nodes?

What do you see that looks wrong?
I didn't understand how "half columns" differs from "all columns", and which one is better or worse?

Yes, scan needs to communicate between nodes because you sent the request to a random node (a scan theoretically needs all nodes, so CQL can't pick the "right one"),
but then it needs to start (and maybe end, depending how much data you have - you only asked for the first 5000 results) on some other specific node that holds the first partitions in token order.

I see the elapsed time for this was 20ms. That's not a huge amount of time, especially if your nodes (what size are they?) have a lot of shards and all of them need to look for data for this scan.
 

Cong Guo

Nov 17, 2021, 8:47:15 AM
to ScyllaDB users
Hi, sorry, I may have misled you: the last trace I sent is not for a scan; its statement is like:
select a, b, c, d, ... from xxx where virtual_id='sss';

The "a, b, c, d, ..." list contains 50 columns, while the table has 100 columns in total.

Cong Guo

Nov 17, 2021, 8:49:22 AM
to ScyllaDB users
I understand a scan will take a long time; I also want to pinpoint why the non-scan query costs 20+ ms, and which part contributes most of the latency.
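One way to break such a query down (a cqlsh sketch; the column names and key value are placeholders):

```cql
-- In cqlsh: trace the statement, then look for large jumps in
-- source_elapsed between consecutive trace events.
TRACING ON;
SELECT a, b FROM fs.vid WHERE vid = 'some-id';
TRACING OFF;
```

For example, the gap between "read_data: sending a message to ..." and "read_data: got response from ..." bounds the time spent waiting on a remote replica.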

Cong Guo

Nov 17, 2021, 8:52:43 AM
to ScyllaDB users
The trace says "read_data: querying locally", but then sends a message to a remote node. That puzzled me, since the read CL is set to ONE.

Cong Guo

Nov 17, 2021, 9:00:09 AM
to ScyllaDB users
Actually I don't think it's an issue; I'm just interested in more details about the processing logic ^-^. Anyway, I will read the code as well when I get the chance.

Cong Guo

Nov 17, 2021, 9:08:26 AM
to ScyllaDB users
The size is just around 4 KB.